Open aesculus opened 2 weeks ago
It sounds like you did not use git clone
to install the Dashboard, or perhaps when you did the overwrite it wiped out all the git metadata. In any case, getting that git repo working again for upgrade to use won't be easy. I'm not even sure how to begin. You can use the manual process as you have done in the past and re-run setup.sh
. Setup is designed to allow you to run it as often as you need to change configurations. However, it doesn't download the latest files (git pull
), which is what upgrade does.
If you want to set up the installation to work with the upgrade script in the future, you could re-install via git clone
in a new directory and copy over the influxdb folder. Then run setup.
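The copy-over step above can be sketched as follows, using throwaway stand-in directories (the directory names and file are illustrative; in practice "new" would come from a real git clone of the repository, and setup.sh would then be run from it):

```python
import pathlib
import shutil
import tempfile

# Stand-in directories: "old" is the existing manual install,
# "new" plays the role of a fresh `git clone` of the repo
root = pathlib.Path(tempfile.mkdtemp())
old = root / "Powerwall-Dashboard-old"
new = root / "Powerwall-Dashboard"
(old / "influxdb").mkdir(parents=True)
(old / "influxdb" / "influxdb.db").write_text("existing data")
new.mkdir()

# Preserve the existing InfluxDB data by copying it into the new clone;
# setup.sh would then be run from the new directory
shutil.copytree(old / "influxdb", new / "influxdb")
print((new / "influxdb" / "influxdb.db").read_text())  # → existing data
```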
Yes. When I first installed it I downloaded directly from GitHub and then had to set the execute flags on the .sh files.
I will ponder whether to copy and run setup again, or bite the bullet and rename the folder, use Git for the pull, and then copy over the influxdb folder from the renamed folder.
BTW I will also need to rebuild the pypowerwall again as I am running a special version to eliminate negative solar values.
Oh, I just wish Tesla would leave well enough alone, but I have been expecting this for a long time, so ...
BTW I will also need to rebuild the pypowerwall again as I am running a special version to eliminate negative solar values.
I've debated adding that to the pypowerwall container code. My design approach has generally been to "pass through" the data as it is reported, so that if there is an issue (bad CT setup, firmware bug) it would show. But I understand the desire to have a clean feed of "things that make sense". I suppose we could add a NO_NEG_SOLAR parameter that forces the container code to report zero for negative values; that way it is an option you can set and still use the project setup and upgrade code.
I think this is a super idea. I was just preparing to rebuild the latest pypowerwall with the modified algo.
# Meters - JSON
# mod for negative loads on solar circuits
jsonpayload = pw.poll('/api/meters/aggregates')
# Parse JSON string
data = json.loads(jsonpayload)
# Update the "instant_power" value in "solar" and "load"
if data["solar"]["instant_power"] < 0:
    data["load"]["instant_power"] += abs(data["solar"]["instant_power"])
    data["solar"]["instant_power"] = 0
# Convert back to JSON string
message = json.dumps(data)
# message: str = pw.poll('/api/meters/aggregates', jsonformat=True)
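As a self-contained sketch, the adjustment above can be wrapped into a toggleable helper (the function name, the flag parameter, and the simplified payload shape are illustrative assumptions, not the actual pypowerwall code):

```python
import json

def adjust_aggregates(jsonpayload: str, neg_solar: bool = True) -> str:
    """Return the aggregates payload, optionally clamping negative solar to zero
    and shifting that amount into the load (home) reading."""
    data = json.loads(jsonpayload)
    if not neg_solar and data["solar"]["instant_power"] < 0:
        data["load"]["instant_power"] += abs(data["solar"]["instant_power"])
        data["solar"]["instant_power"] = 0
    return json.dumps(data)

# Example: 500 W of lighting on the solar circuit with no production
sample = json.dumps({"solar": {"instant_power": -500},
                     "load": {"instant_power": 2000}})
print(adjust_aggregates(sample, neg_solar=False))
# → {"solar": {"instant_power": 0}, "load": {"instant_power": 2500}}
```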
BTW I think I mentioned this before but Tesla strips out all negative solar in the app. My modded values matched theirs exactly in all scenarios.
data["load"]["instant_power"] += abs(data["solar"]["instant_power"])
Interesting. Was this transfer of the negative solar flow over to load (home) also verified in your testing against the Tesla app?
All the values (home, solar, pw, grid) seemed to line up close to perfect between the app and the modded pypowerwall.
I do have one thing that is different from other installs, though: the circuit that has the solar that goes negative (lights) is also measured by load CTs. It could end up causing a small problem if it had a large load during solar production, but that is never the case. I tried toggling it off during heavy solar production but it did not seem to affect the home value. Made me scratch my head a bit. Not sure how that ends up working OK unless Tesla can see that CT (one of the 4 sets) and throws away the solar production from the home load.
But I doubt anyone else has two CTs on the same circuit, so this is a unique situation. It resulted from a solar roof on a building 400 feet away that in theory has loads on it, but those loads are almost entirely lighting, so if it's negative, it's solar production, and if positive, lighting. We don't often use the lights during daylight hours. :-)
Thanks @aesculus - I've created a PR for this https://github.com/jasonacox/pypowerwall/pull/113
You can test the new container by editing powerwall.yml and using this pypowerwall image:
jasonacox/pypowerwall:0.11.1t65-beta
Edit the pypowerwall.env and add this line:
PW_NEG_SOLAR=no
Then restart with ./compose-dash.sh up -d
and you should see the config changes at the stats endpoints: http://pypowerwall:8675/stats and http://pypowerwall:8675/help
OK. Forgive me, as I have not dabbled in this area for over a year and have been using my custom pypowerwall.
I changed the image to your beta and added the env. Then I did the compose and it restarted all the containers. But I ended up with this error on the pypowerwall container:
11/16/2024 09:02:08 PM [proxy] [INFO] pyPowerwall [0.11.1] Proxy Server [t65] - HTTP Port 8675
11/16/2024 09:02:08 PM [proxy] [INFO] pyPowerwall Proxy Started
11/16/2024 09:02:08 PM [proxy] [ERROR] Directory '.' is not writable for cachefile. Check permissions.
11/16/2024 09:02:08 PM [proxy] [ERROR] Fatal Error: Unable to connect. Please fix config and restart.
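The cachefile error indicates the proxy's cache location (the current directory, per PW_AUTH_PATH) isn't writable by the container user. A rough sketch of that kind of startup pre-flight check (a guess at the logic, not the actual pypowerwall source):

```python
import os
import tempfile

def cache_dir_writable(path: str) -> bool:
    """Roughly mimic the proxy's startup check: can a cache file be created here?"""
    return os.path.isdir(path) and os.access(path, os.W_OK)

with tempfile.TemporaryDirectory() as d:
    print(cache_dir_writable(d))                      # → True
    print(cache_dir_writable(d + "/does-not-exist"))  # → False
```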
BTW I think I mentioned this before but Tesla strips out all negative solar in the app.
Huh - that's new. Well new-ish, and also kind of not good. It wasn't doing this 2 years ago, and we needed that negative-on-solar information to identify and correct an issue in our install.
My system is a bit quirky. It's not a normal Linux, and I also installed it as a copy vs. a clone via Git or PIP, so there is that.
There are probably a few things you are missing in your compose setup if you did a manual setup. In the docker compose file 'powerwall.yml', notice the volumes and the AUTH environment setting:
  pypowerwall:
    image: jasonacox/pypowerwall:0.11.1t64
    container_name: pypowerwall
    hostname: pypowerwall
    restart: unless-stopped
    volumes:
      - type: bind
        source: .auth
        target: /app/.auth
    user: "${PWD_USER:-1000:1000}"
    ports:
      - "${PYPOWERWALL_PORTS:-8675:8675}"
    environment:
      - PW_AUTH_PATH=.auth
    env_file:
      - pypowerwall.env
For the 'pypowerwall.env' file (with the new addition):
PW_EMAIL=email@example.com
PW_PASSWORD=password
PW_HOST=10.0.1.2
PW_TIMEZONE=America/Los_Angeles
TZ=America/Los_Angeles
PW_DEBUG=no
PW_STYLE=grafana-dark
PW_GW_PWD=
PW_NEG_SOLAR=no
we needed that negative-on-solar information to identify and correct an issue in our install
Agree, the default will be to pass through exact values to help identify issues. However, I've had a few others besides Chris ask for a way to zero out negative solar values, so I'm leaving it as an optional toggle environment setting for the pypowerwall proxy.
Yes. Missing the entire volumes section in yml
On the env file I have the IP address of the server running Docker. You have PW_HOST=10.0.1.2
What is it supposed to be and how does the 10.0.1.2 come about?
That was an example; your settings should match your setup. PW_HOST is the IP address of the Powerwall Gateway. If you are using the tedapi data, it will be PW_HOST=192.168.91.1
I will probably want to use the tedapi to get back the vitals, but let's get this part working first.
Making progress. Up and running. Tried turning on lights with 0 solar output and the solar stayed at 0 while the house load increased by 500 Watts. So that seemed to work great.
Tried http://pypowerwall:8675/stats but it came back unknown. Also tried the Docker server's IP address. Is this a pseudo IP, and what does it stand for again?
However, I've had a few others besides Chris, ask for a way to zero out negative solar values,
Apologies - I got that. I should have been clearer - my mild grumble was about the Tesla app itself.
Tried http://pypowerwall:8675/stats but it came back unknown.
Sorry, replace pypowerwall with the IP address of your host running the dashboard where pypowerwall is running. 😊
Nope. Is it from pypowerwall or something else in the dashboard? The only thing I messed with was this beta, as my dashboard is a year old. Also no tedapi settings yet. WIP.
OK. Had a real mess here. I had so many instances running that it was basically corrupted, so I had to use:
./compose-dash.sh down
Now everything is up and running. The dashboard vitals are of course missing, but the beta pypowerwall is running:
{"pypowerwall": "0.11.1 Proxy t65", "mode": "Local", "gets": 63, "posts": 0, "errors": 0, "timeout": 0, "uri": {"/aggregates": 9, "/temps/pw": 9, "/pod": 9, "/soe": 9, "/alerts/pw": 9, "/freq": 9, "/strings": 9}, "ts": 1731858912, "start": 1731858864, "clear": 1731858864, "uptime": "0:00:48", "mem": 43536, "site_name": "Aesculus ", "cloudmode": false, "fleetapi": false, "tedapi": false, "pw3": false, "tedapi_mode": "off", "siteid": null, "counter": 0, "cf": ".auth/.powerwall", "config": {"PW_BIND_ADDRESS": "", "PW_PASSWORD": "*****", "PW_EMAIL": "xxx@gmail.com", "PW_HOST": "192.168.1.117", "PW_TIMEZONE": "America/Los_Angeles", "PW_DEBUG": false, "PW_CACHE_EXPIRE": 5, "PW_BROWSER_CACHE": 0, "PW_TIMEOUT": 5, "PW_POOL_MAXSIZE": 15, "PW_HTTPS": "no", "PW_PORT": 8675, "PW_STYLE": "grafana-dark.js", "PW_SITEID": null, "PW_AUTH_PATH": ".auth", "PW_AUTH_MODE": "cookie", "PW_CACHE_FILE": ".auth/.powerwall", "PW_CONTROL_SECRET": null, "PW_GW_PWD": null, "PW_NEG_SOLAR": false}}
EDIT: And I can reconfirm that the negative solar is not shown as my current production is .2kW and trying 500 Watts of lights resulted in 0kW solar. :-)
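For reference, the PW_NEG_SOLAR state can be pulled out of that /stats payload programmatically (the JSON below is a trimmed copy of the full payload above):

```python
import json

# Trimmed copy of the /stats payload shown above
stats_json = '{"pypowerwall": "0.11.1 Proxy t65", "config": {"PW_NEG_SOLAR": false}}'
config = json.loads(stats_json)["config"]
print(config["PW_NEG_SOLAR"])  # → False
```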
Excellent! I'll proceed with my testing and deploy the non-beta release soon.
My PWs were updated to 24.36.2, so I now need to update the Powerwall Dashboard.
Using the upgrade.sh command I get this error:
Note that this is a QNAP NAS, so I have found quirks before with various components. I have also never used the upgrade feature before; I've just overwritten the files and run setup. Perhaps I am bound to do that again.