Closed jasonacox closed 10 months ago
Is this update going to push my edge case right off the edge of the project? (When I just got back onboard days ago!).
I need to run tesla-history to pull data from my solar site which is separate from my Powerwall site. Should I deploy two completely separate Powerwall Dashboard environments, one set up for solar only and one for Powerwall?
> I need to run tesla-history to pull data from my solar site which is separate from my Powerwall site. Should I deploy two completely separate Powerwall Dashboard environments, one set up for solar only and one for Powerwall?
That would work. The pyPowerwall proxy will be able to pull from either site (it is selectable) and you could technically run two instances, one for solar, one for PW. But if you could wave a magic wand, what would be the ideal outcome/solution? This may be a good time to see if we can accommodate.
With the current setup it's really simple to run both. I used the powerwall.extend.yml file to move tesla-history from the solar-only profile to the default profile, so it runs alongside the local powerwall setup.
I'm not 100% clear on how the new local/cloud hybrid is going to work, but the main thing I'd do with my magic wand is make it so I can (1) point the Tesla History container to my solar PPA site and (2) have the configuration persist through running the upgrade.sh script.
Hi @youzer-name - I believe I recall you have the tesla-history container also pulling data into a different DB in InfluxDB?
One important distinction here is that with the pypowerwall setup, data is retrieved by telegraf and written to the raw InfluxDB database, then downsampled by CQs (continuous queries). Running two pypowerwall instances would be problematic.
To have pypowerwall running for both a Powerwall site and a Solar-Only site, with data going to different databases, would mean another separate setup of telegraf, the raw database and the InfluxDB CQs, I think... sounds non-trivial.
I believe your current setup (i.e. using the tesla-history container) can be maintained even after these latest changes, if you keep using the powerwall.extend.yml setup and pypowerwall exactly as you have been.
> (2) have the configuration persist through running the upgrade.sh script.
Not sure if you noticed, but with the latest change I made, you should be able to have your config persist now, since you can put your custom InfluxDB config settings into influxdb.env. Or is there something still preventing this persistence when upgrading?
Actually, while your recent change will probably end up useful later, the specific change in InfluxDB that I was trying to make became part of the base project configuration... that was to turn off internal logging to reduce InfluxDB CPU usage. So for the moment I don't need to customize InfluxDB. I'm pretty sure the database config for Solar-Only was already persisting across upgrades, since everything is set up in Powerwall-Dashboard/tools/tesla-history/tesla-history.conf and that's where I have it pointed to my 'solar-only' database.
As long as there is still a way to have the Tesla-History container run on a periodic schedule, as opposed to only running on demand, I should be ok.
I toggled this PR to ready to start the work, but there is more to do. @mcbirse would love your thoughts on the best way to treat the "cloud mode" going forward. And please, feel free to commit to the v4.0.0 branch as well.
So far, I updated the `setup.sh` and `upgrade.sh` to create the cloud mode pypowerwall files (`.pypowerwall.auth` and `.pypowerwall.site`) for the updated volume binding in Docker compose. I staged version v4.0.0 with pypowerwall v0.7.1. More to do...
The new pypowerwall proxy (t35) version will give users the ability to get live data, including the power flow animation, for systems that don't have a local API. This means the pypowerwall container (`jasonacox/pypowerwall:0.7.1t35`) will support traditional Powerwall owners as well as Solar-only and new Powerwall 3 owners (as mentioned in #387). Anyone wanting to manually add/keep tesla-history can do so in the `powerwall.extend.yml` as @youzer-name mentions. And of course, the standalone tesla-history tool can still be used to fill in historic data as well as any gaps that form over time.
But, I'm wondering about the best way to handle the `upgrade.sh` migration for other existing users. Some initial thoughts:
I'm leaning toward 2 as it seems like the best long-term solution, even if it's a bit more upfront work to make it happen. What am I missing?
> I toggled this PR to ready to start the work, but there is more to do. @mcbirse would love your thoughts on the best way to treat the "cloud mode" going forward. And please, feel free to commit to the v4.0.0 branch as well.
No problem - I will have a look at what changes we need to make.
> So far, I updated the `setup.sh` and `upgrade.sh` to create the cloud mode pypowerwall files (`.pypowerwall.auth` and `.pypowerwall.site`) for the updated volume binding in Docker compose. I staged version v4.0.0 with pypowerwall v0.7.1. More to do...
I actually had a thought about this one yesterday already.... I was thinking, it may be better to add a new folder in the repository called e.g. `.auth` (and add a `.keep` file inside, like the Grafana folder has), then the Docker Compose volume binding should be to the `.auth` folder (instead of the individual files). Cloning the repository will include the folder so the container will always start, even if the .pypowerwall.auth/.site files within are missing. No requirement to create the files in setup/upgrade scripts, and they will be created by pypowerwall during setup.
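As a rough sketch, a folder-level bind of that kind would look something like this in the compose config (the `.auth` folder name and `/app/.auth` container path are the ones proposed here):

```yaml
# Bind the whole .auth folder rather than the individual auth files,
# so the container can start even before the files exist.
services:
  pypowerwall:
    volumes:
      - type: bind
        source: .auth
        target: /app/.auth
```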
> But, I'm wondering about the best way to handle the `upgrade.sh` migration for other existing users.
My preference would be for option 2.
I think for the solar-only users we should be able to provide some instructions for using the tesla-history script as a container, e.g. explaining and providing an example of what needs to be added to a `powerwall.extend.yml` for instance.
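As one hedged illustration, such a `powerwall.extend.yml` entry might look roughly like the following (the service name, build context, paths and profile here are assumptions for the sketch, not the project's actual config):

```yaml
# Illustrative sketch only - adjust image/build, paths and profile
# to match your actual install.
services:
  tesla-history:
    build: ./tools/tesla-history
    container_name: tesla-history
    restart: unless-stopped
    volumes:
      - type: bind
        source: ./tools/tesla-history
        target: /var/lib/tesla-history
    profiles:
      - default
```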
> I was thinking, it may be better to add a new folder in the repository called e.g. `.auth`
I agree. I'll update pypowerwall proxy to support passing a `PW_AUTH_PATH` variable.
> I agree. I'll update pypowerwall proxy to support passing a `PW_AUTH_PATH` variable.
Ahh, that's not what I meant, sorry. I don't think any change to pypowerwall is required. Just that the volume bind would be to a folder instead of the files.
Let me test this and I'll confirm what I intended works.
Ha! Ok, let me know if you can figure it out. That would be great. I staged a new v0.7.2 branch just in case: https://github.com/jasonacox/pypowerwall/commit/743591be44dcf9981aa61ba10f7ce074cd3e2072
> I'll update pypowerwall proxy to support passing a `PW_AUTH_PATH` variable.
Sorry, yes you are correct, we would also need a variable added to pypowerwall to set the path to where the auth files are so it points to the target folder of the compose config. Overall I think this will be a better setup in the long run.
I updated pypowerwall (`jasonacox/pypowerwall:0.7.2t36`) to allow an environment setting for the location of the .pypowerwall.auth and .pypowerwall.site files. It is set with `PW_AUTH_PATH`.
I think the steps are:

- Create the `.auth` folder with a `.keep` file. ✅
- Add `PW_AUTH_PATH=.auth` to the pypowerwall.env.sample. ✅
- For upgrade, we will need to scan the local `pypowerwall.env` to see if the PW_AUTH_PATH exists, and if not, add it to be `PW_AUTH_PATH=.auth` (TODO)

> I think for the solar-only users we should be able to provide some instructions for using the tesla-history script as a container, e.g. explaining and providing an example of what needs to be added to a powerwall.extend.yml for instance.
I'm probably missing something obvious. I assumed that Tesla solar-only customers can still use the Tesla app to see current production, home and grid usage which would mean that we should be able to use the same Tesla Owners API to get that data. I followed your Tesla-history script to grab both battery as well as solar lists in pypowerwall:
```python
sitelist = self.tesla.battery_list() + self.tesla.solar_list()
```
Should all the solar-only data also show up in `api/1/energy_sites/{site_id}/live_status`? If so, if we solve for the "cloud mode" in setup.sh and upgrade.sh, solar-only customers should be able to start using the same dashboard and setup as Powerwall owners. I think?
> I'm probably missing something obvious.
Sorry, no - I think I've had too many late nights and I did not explain properly! What I meant was for the edge cases only - like @youzer-name
> Should all the solar-only data also show up in `api/1/energy_sites/{site_id}/live_status`? If so, if we solve for the "cloud mode" in setup.sh and upgrade.sh, solar-only customers should be able to start using the same dashboard and setup as Powerwall owners. I think?
Yes, I would say the majority of solar-only users can use the new cloud mode of pypowerwall.
> - For upgrade, we will need to scan the local `pypowerwall.env` to see if the PW_AUTH_PATH exists, and if not, add it to be `PW_AUTH_PATH=.auth` (TODO)
Working on this - I have a slightly different approach which may be more sensible. I will commit changes but feel free to choose your preference though.
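For illustration, the check described in the quoted TODO could be sketched in shell like this (hypothetical; the committed scripts may take a different approach):

```shell
# Hypothetical sketch: append PW_AUTH_PATH to pypowerwall.env only if
# it is not already defined (idempotent, so safe to re-run on upgrade).
ENV_FILE="pypowerwall.env"
if ! grep -qs "^PW_AUTH_PATH=" "$ENV_FILE"; then
    echo "PW_AUTH_PATH=.auth" >> "$ENV_FILE"
fi
```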
Thanks @mcbirse! Can't wait to see your approach! Please continue.
PS - Happy New Year!! 🥳
Thanks! Happy New Year! (in advance for your tomorrow) 🥳
I made changes to pypowerwall & proxy as the cloud setup mode was not working when running as a container, i.e. `docker exec -it pypowerwall python -m pypowerwall setup`, which will be needed during the setup script.
Have submitted a PR for that. I had also noticed a delay in the container stopping/restarting which the change to the signal handler should fix (untested).
For the PW_AUTH_PATH environment, I have committed changes around what I was thinking may be better.
i.e. the environment variable is defined in the Docker Compose config only - not pypowerwall.env. It seems clearer, as it is really only relevant to the compose config and then easily identifiable in the file along with the volume bind mount source/targets which it relates to.
Also, if it was defined in pypowerwall.env file, a user looking at that may think they could change the path defined here to point their auth files at a different folder.... but that won't work and will confuse people, because that path is for within the container environment, not the local filesystem.
The paths could be changed/overridden by a user redefining those attributes in the powerwall.extend.yml file instead.
My thoughts/rant on this. Happy for changes if you see a better way or prefer the previous setup. I also haven't tested all this yet since it needs updates for pypowerwall & proxy to be pushed.
Taking a break for NYE now. 😄
Per the changes from https://github.com/jasonacox/pypowerwall/pull/62 I have tested the cloud mode setup, and running `docker exec -it pypowerwall python -m pypowerwall setup` works... Now to incorporate that into the setup scripts.
I've made a start on this (with the intention to unwind the use of Docker Compose profiles for better compatibility).
More work to do yet, but doable... Hardest part will probably be ensuring the upgrade process will work for all of the various installation iterations out there.
Thanks @mcbirse ! Please feel free to make / commit any changes when you have time.
I'm going to work on adding the CQs for the new data we are gathering with pypowerwall v0.7.x, specifically the aggregate Powerwall capacity data. This will give those who don't have vitals (or lost vitals) that important data in the dashboard. It was a top data point the community wanted to keep.
We also added `time_remaining_hours` data. I am playing with that data in the dashboard either as a meter value or possibly as a component of the Powerflow Animation.
I'm not sure yet what would look best or if there is any interest in that data on the dashboard, but it will be available.
@jasonacox Are you calculating the battery time available in realtime based on the current values? If so, I'm not sure how useful a metric that will really be. An hour before the sun goes down it will produce an overly optimistic number. An hour before sunrise, an overly pessimistic number.
I have two panels in my 'experimental' dashboard where I played around with this. One of them is based on the last 24 hours... (battery capacity + last 24 hours solar production) / (last 24 hours home use) = hours remaining. I'm actually using the last 24 hours of home use not including car charging, as I wouldn't do any car charging if the power was out.
I have another panel that is based on the last hour of home use and doesn't account for car charging, and since one of the cars is drawing ~8 kW right now, it says I'd only have 3 hours of battery time!
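For what it's worth, one way to express that 24-hour estimate as code (a hypothetical helper; assumes all inputs are kWh and that the next 24 hours resemble the last 24):

```python
def hours_remaining(capacity_kwh, solar_24h_kwh, home_24h_kwh):
    """Estimate backup runtime: available energy / average hourly home draw."""
    avg_hourly_use_kw = home_24h_kwh / 24.0  # average home draw per hour
    if avg_hourly_use_kw <= 0:
        return float("inf")  # solar alone covers the (zero) load
    return (capacity_kwh + solar_24h_kwh) / avg_hourly_use_kw

# e.g. 13.5 kWh battery, 30 kWh of solar and 24 kWh of home use in the last day
print(hours_remaining(13.5, 30.0, 24.0))  # prints 43.5
```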
Thanks @youzer-name ! That helps confirm that it isn't valuable. I'm computing it but also able to get that number from Tesla Cloud (the same number will show up in the Tesla app) and it is equally useful (or useless 🤷 ). 😁
I'll focus on the Powerwall Capacity aggregate for this release. I know that will be useful for Powerwall 3 owners that don't have access to any local data that includes the separate Powerwall capacity metrics.
For anyone needing (or wanting) to test cloud mode, you can manually activate it in your Powerwall-Dashboard setup by doing this:

- In the `Powerwall-Dashboard` directory run: `mkdir .auth`
- Edit the `powerwalls.yml` file and replace the pypowerwall section with this:

```yaml
  pypowerwall:
    image: jasonacox/pypowerwall:0.7.6t39
    container_name: pypowerwall
    hostname: pypowerwall
    restart: unless-stopped
    volumes:
      - type: bind
        source: .auth
        target: /app/.auth
    user: "${PWD_USER:-1000:1000}"
    ports:
      - "${PYPOWERWALL_PORTS:-8675:8675}"
    environment:
      - PW_AUTH_PATH=.auth
    env_file:
      - pypowerwall.env
    profiles:
      - default
```

- Edit the `pypowerwall.env` file and remove the IP address of the Powerwall Gateway (`PW_HOST=`).
- Run `./compose-dash.sh up -d` and it will download the latest pypowerwall and switch to cloud mode.
- Set up the Tesla auth: `docker exec -it pypowerwall python -m pypowerwall setup`
- Restart: `docker restart pypowerwall`
@jasonacox - I'll have some further updates soon (weekend perhaps). Managed to find a little time last night to work on the setup script some more.
However, this resulted in likely another update to pypowerwall I think, and weather411 (small bug fix), which will need a push to PyPI / Docker Hub etc. again.
I'd much rather be spending my time on this project than my real job! 😄 C'est la vie.
@mcbirse No worries and no rush! And, I'm right there with you! Been crazy at work too... welcome to 2024. 😁
Thanks for all your help! 🙏
Latest commits are work in progress, and require more testing.... new container versions of pypowerwall and weather411 will need to be built and pushed for the Docker Compose changes before being able to test.
TODO: Upgrade script, verify script, more testing... and anything else overlooked?
I accepted the pypowerwall change and pushed the new v0.7.5 version to PyPI and proxy to dockerhub.
- Before it will work, you need to set up the Tesla auth: `docker exec -it pypowerwall python -m pypowerwall setup`
- Restart: `docker restart pypowerwall`
Hi Jason
Just following along to test the Cloud version and I am at step 6. When I issue that command the response is:

```
billr@billrs-Mac-mini Powerwall-Dashboard % docker exec -it pypowerwall python -m pypowerwall setup
pyPowerwall [0.6.4]
Usage:
    python -m pypowerwall [command] [<timeout>] [-nocolor] [-h]

      command = scan        Scan local network for Powerwall gateway.
      timeout               Seconds to wait per host [Default=1.0]
      -nocolor              Disable color text output.
      -h                    Show usage.
```

and nothing else happens.
Hi @billraff - I see the problem:
> pyPowerwall [0.6.4]
You need to upgrade to at least version 0.7.3, which is what step 3 should have done. Did you make that edit? Your install may be using the old version (check `powerwall-v1.yml`). In any case, there is a newer version with some other updates now too; change those referenced yml files to `jasonacox/pypowerwall:0.7.6t39`. You can also manually remove the old images:

```shell
# stop and remove old one
docker stop pypowerwall
docker rm pypowerwall

# remove all old images
docker images | grep pypowerwall | awk '{print $3}' | xargs docker rmi -f

# start the stack
./compose-dash.sh up -d
```
It looks good and we are getting close @mcbirse. What's left on your list? I'll start testing.
@jasonacox - I think we are at "almost ready for release" stage, pending some further testing and updates to documentation / instructions / release notes, etc.
I have tested the following successfully (both Linux and Windows, but not Mac):

- `setup.sh` after cloning repository (both Local Access mode and Tesla Cloud)
- `setup.sh`
- `verify.sh` script in various states from good to things being broken

For anyone else that would like to test and provide feedback, the manual instructions here https://github.com/jasonacox/Powerwall-Dashboard/pull/414#issuecomment-1882355993 can be followed for an existing install.
Or, to test a clean install:

```shell
# Clone v4.0.0 branch
git clone -b v4.0.0 https://github.com/jasonacox/Powerwall-Dashboard.git

# Install Powerwall-Dashboard v4.0.0
cd Powerwall-Dashboard
./setup.sh
```
- `git pull; git checkout v4.0.0; cp upgrade.sh tmp.sh; bash tmp.sh upgrade` which ran clean (`./verify.sh`)
- `IP Address (recommended or leave blank to scan network):` or `IP Address (leave blank to scan if unknown):`
- `cp influxdb.env.sample influxdb.env` then it ran correctly. Followed upgrade procedure and reran setup.
This script will attempt to verify all the services needed to run Powerwall-Dashboard. Use this output when you open an issue for help: https://github.com/jasonacox/Powerwall-Dashboard/issues/new
All tests succeeded.
Animation working and I'm seeing data in the alerts field. Granted, only a single alert: "System Connected to Grid"
> Animation working and I'm seeing data in the alerts field. Granted, only a single alert: "System Connected to Grid"
Thanks for confirming the animation works! 🚀
As to the alert, yes, without the vitals payload, we are limited. I plan to spend some time to figure out if we can infer other alerts based on various signals in the payloads. We might be able to add some new ones that would be helpful.
> The "scan for IP" feature is nice but slow. I wonder if we should somehow hint that entering the IP if you know it is the recommended path.
Implemented multi-threaded network scanning in latest pypowerwall to scan multiple hosts simultaneously... this should increase the network scan speed significantly.
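Not the actual pypowerwall code, but the idea can be illustrated with a small thread-pool scanner (hypothetical helper; probes a TCP port on each candidate host concurrently instead of one at a time):

```python
import socket
from concurrent.futures import ThreadPoolExecutor

def check_host(ip, port=443, timeout=1.0):
    """Return the IP if a TCP connection to the port succeeds, else None."""
    try:
        with socket.create_connection((str(ip), port), timeout=timeout):
            return str(ip)
    except OSError:
        return None

def scan(hosts, port=443, workers=64):
    """Probe many hosts in parallel; returns the list of responding IPs."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = pool.map(lambda ip: check_host(ip, port), hosts)
    return [ip for ip in results if ip]
```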
I'm running through the tests again, but this is looking ready to merge.
- ✅ WinOS - note below
- ✅ MacOS
- ✅ Raspberry Pi
- ✅ Ubuntu Linux
During the Windows test, when using the WSL2 shell, `uname -s` returns Linux and the IP address it picks for scanning is the 172.x.x.x variety instead of the LAN. I can't find an easy fix for that, and I don't know that it is a serious issue, as the user still has the opportunity to enter their LAN network (e.g. 10.0.1.0/24) and the scan works.
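If a hint ever seems worthwhile, WSL can at least be detected from the kernel release string (a sketch with an assumed message; not part of the actual setup.sh):

```shell
# Hypothetical sketch: uname -s says "Linux" under WSL2, but the kernel
# release string contains "microsoft", which could trigger a hint.
if uname -r | grep -qi "microsoft"; then
    echo "WSL detected - consider entering your LAN subnet (e.g. 10.0.1.0/24)."
fi
```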
I can't recall the status of this, but should `dashboard-solar-only.json` be updated to include the Powerflow meter again?
Great idea! I added it back... I should probably look at a way to modify the display to remove the Powerwall icon, but that can be a future edit (or someone else can help figure that out :).
I'm going to do a quick pass through the documentation and then merge. 🚀
Thanks to everyone who tested for us! If you switched to the v4.0.0 branch, you will want to switch back to main and upgrade:
```shell
git checkout main
./upgrade.sh
```
Thanks for all the work on this @mcbirse ! 🙏
Nice work @mcbirse and @jasonacox. I just upgraded and it nicely handled my extend.yml for my ecowitt weather service without a hiccup.
I've just got my control service using the fleet API for setting modes (self-consumption, autonomous for fast charging, backup for slow charging) and backup percentages, and it seems to be working just as well as the owner API used to work.
Now that I've got it working on the fleet API, I think a re-write from VB.Net to Python is next cab off the rank (which means I could submit a massive PR for control to this repo), but that is a long way down the track, I fear.
Thanks @BJReplay !
On the FleetAPI, I had started on the python version here: https://github.com/jasonacox/pypowerwall/tree/main/tools/fleetapi - My TODO is to add this into pypowerwall's cloud class to allow the user to pick FleetAPI (if they sign up for it) or the Tesla Owners API from TeslaPy. We could also set it up so that it could be used for other tools/projects (just `from pypowerwall import fleetapi` or something) which would expose easy-to-use functions for making FleetAPI calls.
> On the FleetAPI, I had started on the python version here:
I wish I'd found your version before I muddled my way through (using Postman). I got to `POST /api/1/partner_accounts` pretty quickly, but every call I made got 412 Precondition Failed with body:

```json
{
  "error": "Account must be registered in the current region https://fleet-api.prd.na.vn.cloud.tesla.com/, please see https://developer.tesla.com/docs/fleet-api#register"
}
```
which, of course, was exactly what I was trying to do - register :)
I gave up for a couple of days, came back, tried again, and it worked first time. So I put it down to guessing that maybe their service wasn't working the first time I tried, or that it takes some time between creating your public key and being able to register. But from your readme, it looks like once you've registered and generated your key, it should be pretty much automatic?
My project now has a flag UseFleetAPI, and then switches between using OwnerAPI and its endpoint, or FleetAPI and its end point for auth and calls.
The downside is that you used to be able to do headless login to the OwnerAPI, whereas now you can't with the Fleet API - which isn't great for a service - hence the plan to rewrite.
> it looks like once you've registered and generated your key, it should be pretty much automatic?
That's correct. Once registered you use the Client ID and the one-time-code to generate your auth token and refresh token. The sticky bits are that it requires you to have your own website (domain with a PEM file stored on it for registration) and sign up with a business name which may not work for everyone. Also, there are rate limits which don't seem like a problem for those of us using for a hobby, but could be an issue if you are signing up others under your Client ID. Good news is that it is free for now and it is an official API from Tesla vs the unofficial version we have been using. 😁
This update will use the cloud mode support built in to pyPowerwall v0.7.0 to pull live system metrics from the Tesla Owners API.
Related

Updates:

- `setup.sh` and `upgrade.sh` to support transition to pyPowerwall for cloud mode.
- `verify.sh` to support cloud mode metrics.