jasonacox / Powerwall-Dashboard

Grafana Monitoring Dashboard for Tesla Solar and Powerwall Systems
MIT License

Tesla Solar-Only Dashboard #183

Open jasonacox opened 1 year ago

jasonacox commented 1 year ago

There has been a lot of interest from Tesla Solar owners who don't have a Powerwall in getting a similar dashboard for their systems. I'm opening this as a feature request to explore creating a derivative dashboard using a similar stack plus the tesla-history script developed by @mcbirse as a potential service to feed a solar-only dashboard.

Reference: Reddit Thread

Proposal

Add a Tesla-Solar-Only dashboard option in the tools folder. Basically, an InfluxDB+Grafana+Import-Tool (optional +Weather411) stack. The Import-Tool could be converted into a Python service, for instance, that polls the Tesla API every 5m or so and stores the data in InfluxDB. We could use the dashboard.json and remove the Powerwall-specific panels.
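
For illustration, a rough sketch of what such a polling service might look like (the fetch_solar_data placeholder, host/database names, and interval are assumptions, not the actual implementation):

import time
from influxdb import InfluxDBClient

POLL_INTERVAL = 300  # seconds, i.e. poll roughly every 5 minutes

def fetch_solar_data():
    # Placeholder: would call the Tesla cloud API (e.g. via the
    # tesla-history script) and return the latest power readings
    return {"solar_power": 0.0, "site_power": 0.0}

def main():
    client = InfluxDBClient(host="influxdb", port=8086, database="powerwall")
    while True:
        point = {"measurement": "http", "fields": fetch_solar_data()}
        client.write_points([point])
        time.sleep(POLL_INTERVAL)

if __name__ == "__main__":
    main()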

We need someone who has a Solar-Only setup to validate.

jasonacox commented 1 year ago

Hi @Jot18, this means the setup script couldn't find python. I suspect your installation has it under a different name (python vs. python3), or it may not be installed at all. Do you get any response when you try python -V? Are you using Windows OS?

@mcbirse we could add a conditional to check that python is installed and error out if it isn't available, instructing the user to install python. Or would it make more sense to have a docker run command that uses the container to set the config values/tokens?

Jot18 commented 1 year ago

Python --version returned Python 3.11.3

Yes, I'm using Windows 10 64-bit. [screenshot]

mcbirse commented 1 year ago

Or would it make more sense to have a docker run command that uses the container to set the config values/tokens?

@jasonacox - I was actually thinking this originally when changing the setup script as an option to reduce requirements of the local environment, but didn't get back to it. I'll have a look/test this tonight when I am home and push an update to setup.sh for testing, if it works okay for me.

@Jot18 - Thanks for testing - it's this sort of issue report that enables us to make improvements and refine the system for the multitude of platforms. Hang tight, I should be able to make an update to fix this for your setup.

mcbirse commented 1 year ago

Hi @Jot18 - I have made a change to the setup script so it should no longer rely on the local environment python/modules etc.

It will run the tesla-history setup process via the docker container instead.

I've merged the change so you can test by pulling the latest update from Powerwall-Dashboard:

# Pull latest changes
git pull

Or if doing a fresh install:

# Download 
git clone https://github.com/jasonacox/Powerwall-Dashboard.git

Then run setup:

# Select Solar Only
cd Powerwall-Dashboard/tools/solar-only

# Run setup
./setup.sh

Let us know how it goes! 🤞

Jot18 commented 1 year ago

Hi @Jot18 - I have made a change to the setup script so it should no longer rely on the local environment python/modules etc. [...] Let us know how it goes! 🤞

@mcbirse It's working now. However, I'm not sure if it's intentional or a typo.

Side note: Is it possible to export InfluxDB data automatically? I've been exporting .csv files from Grafana but would like to automate it somehow. I have another Power BI instance running for cost analysis and TOU consumption vs. production. Any help is greatly appreciated.

Thanks.

youzer-name commented 1 year ago

@mcbirse - You've saved me from needing to roll my own solution to get data from the SolarEdge API! Running this daemon should allow me to get the data I need to (approximately) calculate the costs/savings from my two solar arrays. I have a few questions about setting it up in my environment.

Short recap: I have a Solar City/Tesla array that I pay for under a PPA and two Powerwall 2s. I just added a second solar array (purchased) that will have a SolarEdge inverter once the installers come back with a working unit to replace the one that died as soon as they turned it on.

The Powerwall Gateway will see the total production of both arrays, so I need some way to get either the SolarEdge or the Tesla production numbers separate from the PW Gateway so I can do the math to figure out how much solar was generated by each array. My Powerwall Dashboard setup has been customized a lot, so I'm not using the setup.sh script or git pull. I download and integrate any updates that I want manually.

Scrolling through the topic, it looks like if I just change the database name in the config file from 'powerwall' to 'solar' (or something similar), the daemon will be able to write the data to InfluxDB without any conflict with the Powerwall Dashboard data. Do I have that right?

So could I:

  • Add the tesla-history section to my powerwall.yml file.

    • Modify the source path for the volume to point to a folder where I have tesla-history.conf (with the db name changed) and a working tesla-history.auth (created when I ran the script manually)
    • Do a docker-compose down and up to create the new container.

If I'm following along correctly, that will create a container in the Powerwall Dashboard stack that will pull the solar-only data and write it to whatever database I specified in the tesla-history.conf. Any other changes I'd need to make?

** footnote: Since the Powerwall Gateway always shows higher production numbers than my Solar City inverter, the best I can get is approximate numbers. When looking at monthly totals, the Powerwall Gateway shows about 4% more solar production than I'm being billed for via the Solar City inverter data. I assume the difference is down to one being a revenue-grade meter and the other not.

gpieroni1 commented 1 year ago

Upgrade went off without a hitch for me and the tesla-history container is working great. Thank you again for this!

youzer-name commented 1 year ago

So could I: [...] Any other changes I'd need to make?

@mcbirse - So I gave it a go and it looks like I was able to get it running the way I wanted. I did encounter one issue related to the DB HOST configuration that may be a bug.

I created a 'solaronly' database in InfluxDB and I copied my .conf and .auth files to ./tesla-history. I updated the .conf to add my site ID and change the database name to 'solaronly'. I added the tesla-history section to my powerwall.yml and did a down/up of the stack.

Initially I got an error on the database connection. When I was running this before (not in the docker container) I was able to use "192.168.0.12" for my HOST and 8085 as my port for InfluxDB. I used to have two instances of InfluxDB running, so this one is named "pw-influxdb" and exposes port 8085 while using port 8086 internally. From outside the container it is at 192.168.0.12:8085 and from inside the stack it is at pw-influxdb:8086.

When I first tried to run this using the IP and external port I got this error:

ERROR: Failed to write to InfluxDB: HTTPConnectionPool(host='influxdb', port=8085)

I'm not sure where it was getting 'influxdb' as the DB host. The conf file said:

HOST = 192.168.0.12
PORT = 8085

It picked up the port number, but I'm not sure why it ignored the IP address. So I switched it to use the internal reference to the database:

HOST = pw-influxdb
PORT = 8086

and that cleared up the connection error. Is that a bug in the way the conf file is read? I have data sources in Grafana set up both ways and it is able to connect to the database whether using the internal or external address and port.

The final thing I had to do to set this up manually was to create the retention policies. I went ahead and duplicated all the retention policies from the main database, but I expect that several of those (pod, alerts, etc) weren't needed. I'm seeing data flowing into the solaronly database, in the http measurement, and in the autogen, kwh, daily, and monthly retention policies.
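
For anyone repeating this manually, the policies can also be created programmatically; a sketch using the influxdb Python client (the INF durations are an assumption, so copy the real definitions from the main powerwall database):

from influxdb import InfluxDBClient

client = InfluxDBClient(host="192.168.0.12", port=8085)
# 'autogen' already exists as the default policy; add the others
for name in ("kwh", "daily", "monthly"):
    client.create_retention_policy(name, "INF", 1, database="solaronly")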

mcbirse commented 1 year ago

@youzer-name & others - sorry, have been busy this week and have not had time to get back to several posts here.

Just quickly - check your powerwall.yml config, as the InfluxDB hostname can be set there via an environment variable for when tesla-history is running in a docker container / daemon mode (the IHOST env variable).

I did this on purpose, as the hostname when running in a docker container could be different to when trying to run the tesla-history script outside the container, for instance to extract history data.

So if the powerwall.yml config defines the InfluxDB IHOST env variable, this will override the hostname in the tesla-history config file when running in daemon mode. I hope that makes sense!

youzer-name commented 1 year ago

@mcbirse - I missed that host reference in the powerwall.yml and I just updated it to be 'pw-influxdb' to match my environment. Does it make sense that it used the IHOST from powerwall.yml when I entered an IP address in tesla-history.conf, but it used the HOST from tesla-history.conf when I changed that line of the conf file to use the container name?

mcbirse commented 1 year ago

Does it make sense that it used the IHOST from powerwall.yml when I entered an IP address in tesla-history.conf, but it used the HOST from tesla-history.conf when I changed that line of the conf file to use the container name?

@youzer-name - I'll try to explain the configuration scenarios and what I intended with this, and hopefully it makes more sense. If it is not working per below there could be a bug, as I have not tested all scenarios.

The tesla-history script was originally intended to be run from the command line only, to pull in historical data (so, not as a daemon or in a docker container).

Adding a daemon mode option and the ability to run in a docker container complicates the setup a little bit. I wanted it to have the flexibility to be able to run in a docker container, or still be run outside the container to pull in historical data as well, or even in daemon mode manually from the command line or as some other system service... however, still use the same common/shared config for all cases.

When running in a docker container, the InfluxDB hostname and port could be different to if you were to run the script from outside the container (even though you are writing to the same InfluxDB instance).

So, I added an option to be able to define the InfluxDB hostname via an environment variable for daemon mode, i.e. for when running in a docker container.

If the environment variable is defined, this will override the hostname defined in the tesla-history.conf file. This also means however that the hostname defined in the tesla-history.conf file can be set to the host/ip required for running the script from outside the container (e.g. to pull history data) which gives some flexibility.
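
Conceptually the precedence is just an environment-variable fallback. A simplified sketch of the idea (not the actual tesla-history code):

import os
import configparser

config = configparser.ConfigParser()
config.read("tesla-history.conf")

# An env variable set in powerwall.yml wins; otherwise use the conf file value
host = os.environ.get("IHOST", config.get("InfluxDB", "HOST"))
# (PORT via env variable is the addition noted below)
port = int(os.environ.get("IPORT", config.get("InfluxDB", "PORT")))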

Example per your setup:

I used to have two instances of InfluxDB running, so this one is named "pw-influxdb" and exposes port 8085 while using port 8086 internally. From outside the container it is at 192.168.0.12:8085 and from inside the stack it is at pw-influxdb:8086.

NOTE: I've realised from your setup I should allow the PORT to be defined by environment variable in the docker compose configuration as well. I will update the script to support this.

For your example setup, you might have a docker compose config like below.

The InfluxDB internal hostname is called "pw-influxdb" with internal target port of 8086, and the externally published IP/port is 192.168.0.12:8085.

Given that, in the same file it is easy to know/see what should be defined for the tesla-history docker container by looking at your pw-influxdb container configuration. So for the tesla-history container to connect to the internal InfluxDB instance, you would define the environment variables as "IHOST=pw-influxdb" and "IPORT=8086"

services:
    pw-influxdb:
        image: influxdb:1.8
        container_name: pw-influxdb
        hostname: pw-influxdb
        restart: always
        volumes:
            - type: bind
              source: ./influxdb.conf
              target: /etc/influxdb/influxdb.conf
              read_only: true
            - type: bind
              source: ./influxdb
              target: /var/lib/influxdb
        ports:
            - target: 8086
              published: 8085
              mode: host

    tesla-history:
        image: jasonacox/tesla-history:0.1.0
        container_name: tesla-history
        hostname: tesla-history
        restart: always
        volumes:
            - type: bind
              source: ./tesla-history
              target: /var/lib/tesla-history
        environment:
            - IHOST=pw-influxdb
            - IPORT=8086
            - TCONF=/var/lib/tesla-history/tesla-history.conf
            - TAUTH=/var/lib/tesla-history/tesla-history.auth
        depends_on:
            - pw-influxdb

Then, in the tesla-history.conf file, you can configure the HOST/PORT to be the external hostname/port that is used outside the container, i.e. "HOST = 192.168.0.12" and "PORT = 8085"

[Tesla]
# Tesla Account e-mail address and Auth token file
USER = yourname@example.com
AUTH = tesla-history.auth

[InfluxDB]
# InfluxDB server settings
HOST = 192.168.0.12
PORT = 8085
# Auth (leave blank if not used)
USER =
PASS =
# Database name and timezone
DB = solaronly
TZ = America/New_York

[daemon]
; Config options when running as a daemon (i.e. docker container)
# Minutes to wait between poll requests
WAIT = 5
# Minutes of history to retrieve for each poll request
HIST = 60
# Enable log output for each poll request
LOG = no
# Enable debug output (print raw responses from Tesla cloud)
DEBUG = no
# Enable test mode (disable writing to InfluxDB)
TEST = no
# If multiple Tesla Energy sites exist, uncomment below and enter Site ID
SITE = 123456789

This means you can still run the tesla-history script at any time from outside the container and pull history data without having to have a different config file, and the tesla-history docker container still runs fine as the hostname/port is defined by the environment variables in the docker compose configuration.

I hope that makes sense or if you see any issues with this or something is not working, please let me know.

I will update the script to also allow the PORT to be defined by environment variable as well, which would be required for your setup.

mcbirse commented 1 year ago

@youzer-name - In the latest update defining the PORT is now supported.

Please note the variable names used for the tesla-history docker compose config have been changed to be more descriptive, i.e.

        environment:
            - INFLUX_HOST=influxdb
            - INFLUX_PORT=8086
            - TESLA_CONF=/var/lib/tesla-history/tesla-history.conf
            - TESLA_AUTH=/var/lib/tesla-history/tesla-history.auth

youzer-name commented 1 year ago

@mcbirse That all makes sense, and I'm glad my setup was weird enough to shed light on the potential need for the port config option. 😄

On the off chance that anyone is interested in what the data from the solar-only history looks like compared to the data from a Powerwall Gateway, this is what I'm seeing:

[graph: solar-only history data ("Solar City") vs. Powerwall Gateway data ("Solar Energy")]

"Solar City' is coming from the history API and "Solar Energy" is coming from the Gateway. As expected, the history data smooths out the peaks and valleys due to the lower sampling rate. This shows all of yesterday, but during the day the history data also lags slightly at the right edge of the graph due to the 5 minute update interval.

As of (hopefully) Tuesday, when I get the inverter replaced on my new array, the line for "Solar Energy" will be quite a bit higher as it will be the total solar output seen by the Powerwall Gateway from both arrays.

It looks like the daily totals from the tesla-history api exactly match what I get in the Tesla app, and those numbers match what I get billed for under my PPA, so I'm going to adjust my cost calculations to use the solar-only history generation numbers. That should make them dead-on accurate going forward, whereas in the past they had always been a bit off due to the higher generation numbers coming from the Gateway.

Thanks again for all the time and effort you (and everyone else) have put into these tools.

jasonacox commented 1 year ago

Does anyone have an example "solar-only" dashboard screenshot you would be willing for us to post on the https://github.com/jasonacox/Powerwall-Dashboard/tree/main/tools/solar-only#tesla-solar-only page as an example?

Jot18 commented 1 year ago

Screenshot: https://user-images.githubusercontent.com/20891340/252177682-3f954359-e851-462e-ba20-e1ad90db5bd7.png

jasonacox commented 1 year ago

Thanks @Jot18 ! It looks great!

apU823 commented 1 year ago

Can you share your dashboard file?


Jot18 commented 1 year ago

@apU823 I think it's this one. Just change .txt to .json. Can't upload json files here.

Solar-Only.txt

jasonacox commented 1 year ago

The dashboard should also be in the solar-only folder: https://github.com/jasonacox/Powerwall-Dashboard/blob/main/tools/solar-only/dashboard.json

jared-w-smith commented 1 year ago

I have solar only and a new Tesla inverter that seems to have a different management system than older inverters. When I go to the inverter IP address on my local network, it quickly redirects from the login page to an /upgrade page that prompts to install the Tesla Pros app. Using the Tesla Pros app I am readily able to log in as a customer, connect to the TEG network, and view/set all inverter data and settings (usage/generation, MPPT string data, CT settings, wifi settings, etc.).

I'm hoping to be able to access this same data directly from the wifi network. I'm able to stop the web redirect and get the login page. And I'm similarly able to POST using curl to /api/login/Basic, but in both cases I get "bad credentials" errors. I believe the issue is with the password not being correct, but I've no idea what this might be - my Tesla account password, the password I used when I set up the gateway, and the last 5 of the inverter serial do not work. I think that if I can figure out authentication (as is readily done in Tesla Pros), then we'd be able to add back a lot of the inverter control and data to the dashboard for solar-only customers.

Any ideas?

apU823 commented 1 year ago

Do you have a model number for the new inverter?

I wonder if they simply forgot to turn off Tesla Pro access?


jared-w-smith commented 1 year ago

I made some progress. It took some experimenting, but I found the default password is the last 5 characters of the TEG WiFi password (not the serial number). So, I was then able to set my "customer" password at /password. I can now see the web-based monitoring system in my browser.

Now that I have my customer password set, I'm able to view my access token in DevTools. I can also generate a token by authenticating using: curl -k -i -X POST https://192.168.0.5/api/login/Basic -H "Content-Type: application/json" -d "{\"username\": \"customer\",\"password\": \"PASSWORD\"}"

This returns a token value (such as): {"email":"","firstname":"Tesla","lastname":"Energy","roles":["Home_Owner"],"token":"avUuKAQsnRiaOVygfqDaWjRCqUEUDJ7NDHu-Cfl9eE-ld-tQILYtjvt0T9C-AdGMNkAEKaYAgo0ALivFINoIhQ==","provider":"Basic","loginTime":"2023-07-21T21:43:06.687730825-06:00"}

With the returned token I can access most of the Powerwall API endpoints. For example: curl -k -i --header "Authorization: Bearer avUuKAQsnRiaOVygfqDaWjRCqUEUDJ7NDHu-Cfl9eE-ld-tQILYtjvt0T9C-AdGMNkAEKaYAgo0ALivFINoIhQ==" https://192.168.0.5/api/meters/aggregates returns the real-time site, load, and solar generation values.
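
For anyone wanting to script this, the same two steps might look like the following in Python (a sketch based on the curl commands above; verify=False because the gateway serves a self-signed certificate, and the IP/password are examples):

import requests
import urllib3

urllib3.disable_warnings()  # the gateway uses a self-signed certificate

GATEWAY = "https://192.168.0.5"

# Step 1: log in as 'customer' to obtain a bearer token
login = requests.post(GATEWAY + "/api/login/Basic",
                      json={"username": "customer", "password": "PASSWORD"},
                      verify=False)
token = login.json()["token"]

# Step 2: fetch the real-time site, load, and solar generation values
agg = requests.get(GATEWAY + "/api/meters/aggregates",
                   headers={"Authorization": "Bearer " + token},
                   verify=False)
print(agg.json())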

To answer your questions, the hardware shows model 1535843-00-D. The /api/solars endpoint shows model of "PVI-45".

apU823 commented 1 year ago

Just curious, can you share some screenshots of what the web GUI looks like, with the real-time data?

Might make for an interesting post over on r/teslasolar


jared-w-smith commented 1 year ago

Here's a screenshot of the web interface.

TeslaWebInterface

It's a bit tricky to get to the log-in page because of the redirect to /upgrade. I use DevTools to temporarily enable network throttling, then hit Esc when the login form appears, but before it redirects. I can then log in successfully. The web interface is pretty minimal - and isn't needed at all for programmatic access to the APIs (except perhaps to initially set the "customer" password).

I think that, with the solar-only inverter providing most of the local APIs, additional dashboard functionality may be possible. I'm especially interested in string data, though it appears that /devices/vitals does not output this data. I am able to see this data in the Tesla Pros app, so it must be available somehow.

hulkster commented 1 year ago

GREAT thread, as I have exactly this situation: I am also solar-only ... my inverter is 1538000-45 and the installed software is 23.12.3

I can confirm what @jared-w-smith wrote above. LOL that one has to throttle the browser to 3G to get past the redirect from the login screen.

I'm VERY disappointed (with Tesla!) to hear that you can't pull the String Data (since that is what I'm interested in also) ... and I too was unsuccessful after a little bit of poking around. It's gotta be there somewhere ... since I too can see it in the Tesla Pros app.

mcbirse commented 12 months ago

I created a placeholder for this effort: https://github.com/jasonacox/Powerwall-Dashboard/tree/main/tools/solar-only

It duplicates all of the core Dashboard project files with edits to remove pypowerwall and telegraf. Ultimately, I would like to de-dupe all of this if we can get it to work and have it as an option in the main setup.sh.

Hi @jasonacox - referencing your comment above from way back in March (has it been that long!?)...

I have also always had in mind that eventually we should de-dupe the solar-only offshoot and merge this into the main Powerwall Dashboard stack as a setup option. I believe the beta testing is complete and successful (thank you to everyone here for their help, testing & feedback!).

As such, I've been working on this to come up with a nice solution that should be quite extensible for future changes as well, and enables a more modular approach to how we can handle different docker container requirements depending on the user's setup (i.e. for now, Powerwall vs. Solar Only).

Is it okay if I create a new branch that I can start to commit changes to? I'd rather commit here directly than to my own fork.

Is a branch and new version v2.10.1 okay with you, or would you prefer to bump the major version? (there will be a lot of changes)

Here are the details of what I have planned and have tested so far.

Use profiles with Docker Compose

Docker Compose supports assigning profiles to services. The profiles to use can be specified with the COMPOSE_PROFILES environment variable, and only those services assigned to the chosen profiles would be pulled & run.

The different container requirements for a Powerwall vs. Solar Only setup can be handled easily this way.

The below would be added to the compose.env file with the setup script modifying the profile list based on the user's choices.

#------------------------------------------------------------------------------#
# Powerwall-Dashboard setup profile (default or solar-only, + optional profiles)
#------------------------------------------------------------------------------#
# Comma-separated list of profiles to enable
# - must include either default or solar-only
# - can include optional profiles, e.g. weather411
#
#COMPOSE_PROFILES=default,weather411
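
For example, with COMPOSE_PROFILES=solar-only set in compose.env, a plain docker compose up -d would start the always-on core services (influxdb, grafana) plus only the services assigned to the solar-only profile (tesla-history), while pypowerwall and telegraf would be skipped entirely.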

Setup script changes

The setup.sh script will prompt the user to choose which configuration profile the dashboard should be installed with (i.e. Powerwall or Solar Only), and modify the COMPOSE_PROFILES variable in compose.env accordingly.

Powerwall Dashboard (v2.10.1) - SETUP
-----------------------------------------
Select configuration profile:

 1 - default     (Powerwall w/ Gateway on LAN)
 2 - solar-only  (No Gateway - data retrieved from Tesla Cloud)

The same would be done with the weather411 service, which would only be added to the profile list if the user chooses to install it. The advantage of this approach is that if the user does not wish to use weather411, the container will not be pulled, and therefore will no longer be downloaded and running all the time like before.

Docker Compose powerwall.yml definition changes

The Compose powerwall.yml service definitions are modified so that services are assigned to the required profile.

NOTE: The container services that are common to all setup profiles are not assigned to any profile, so they will always be started (e.g. influxdb & grafana).

Below is an example of the updated powerwall.yml definition (note the "profiles" attribute for each service).

version: "3.5"

services:
    influxdb:
        image: influxdb:1.8
        container_name: influxdb
        hostname: influxdb
        restart: unless-stopped
        volumes:
            - type: bind
              source: ./influxdb.conf
              target: /etc/influxdb/influxdb.conf
              read_only: true
            - type: bind
              source: ./influxdb
              target: /var/lib/influxdb
        ports:
            - "${INFLUXDB_PORTS:-8086:8086}"

    pypowerwall:
        image: jasonacox/pypowerwall:0.6.2t28
        container_name: pypowerwall
        hostname: pypowerwall
        restart: unless-stopped
        user: "${PWD_USER:-1000:1000}"
        ports:
            - "${PYPOWERWALL_PORTS:-8675:8675}"
        env_file:
            - pypowerwall.env
        profiles:
            - default

    telegraf:
        image: telegraf:1.28.2
        container_name: telegraf
        hostname: telegraf
        restart: unless-stopped
        user: "${PWD_USER:-1000:1000}"
        command: [
            "telegraf",
            "--config",
            "/etc/telegraf/telegraf.conf",
            "--config-directory",
            "/etc/telegraf/telegraf.d"
        ]
        volumes:
            - type: bind
              source: ./telegraf.conf
              target: /etc/telegraf/telegraf.conf
              read_only: true
            - type: bind
              source: ./telegraf.local
              target: /etc/telegraf/telegraf.d/local.conf
              read_only: true
        depends_on:
            - influxdb
            - pypowerwall
        profiles:
            - default

    grafana:
        image: grafana/grafana:9.1.2-ubuntu
        container_name: grafana
        hostname: grafana
        restart: unless-stopped
        user: "${PWD_USER:-1000:1000}"
        volumes:
            - type: bind
              source: ./grafana
              target: /var/lib/grafana
        ports:
            - "${GRAFANA_PORTS:-9000:9000}"
        env_file:
            - grafana.env
        depends_on:
            - influxdb

    weather411:
        image: jasonacox/weather411:0.2.2
        container_name: weather411
        hostname: weather411
        restart: unless-stopped
        user: "${PWD_USER:-1000:1000}"
        volumes:
            - type: bind
              source: ./weather
              target: /var/lib/weather
              read_only: true
        ports:
            - "${WEATHER411_PORTS:-8676:8676}"
        environment:
            - WEATHERCONF=/var/lib/weather/weather411.conf
        depends_on:
            - influxdb
        profiles:
            - weather411

    tesla-history:
        image: jasonacox/tesla-history:0.1.3
        container_name: tesla-history
        hostname: tesla-history
        restart: unless-stopped
        user: "${PWD_USER:-1000:1000}"
        volumes:
            - type: bind
              source: ./tools/tesla-history
              target: /var/lib/tesla-history
        environment:
            - INFLUX_HOST=influxdb
            - INFLUX_PORT=8086
            - TESLA_CONF=/var/lib/tesla-history/tesla-history.conf
            - TESLA_AUTH=/var/lib/tesla-history/tesla-history.auth
        depends_on:
            - influxdb
        profiles:
            - solar-only

Upgrade script changes

Being mindful of the beta testers who have already installed and are using the solar-only dashboard from the placeholder location, the upgrade script has been modified to migrate existing solar-only installs to the main project location, without losing data.

jasonacox commented 11 months ago

I love this, @mcbirse !

I think it would be appropriate to bump this to v3.0.0. I could be convinced to go with v2.11.0, but it feels like a more significant update. Feel free to create a new branch.

mcbirse commented 11 months ago

Thanks @jasonacox - will do, and the changes are definitely v3 worthy!

I just need to reconcile and review some of my work on a couple of test systems, and then start committing those changes to the branch. Once I think it is ready, I will create a PR to main, at which point it should be ready for testing and review etc.

The tesla-history script has a small update, and will need a push to Docker Hub again (small bugfix and added --version option for use in the verify.sh script).

There are more changes than just merging the solar-only offshoot as well. As I noticed issues during testing I decided to try to address them. This includes trying to address some common problems we have faced, e.g. permissions issues, by adding some more checks/options in the setup.sh script.

Happy for review/testing/accept/reject/revise etc. Honestly the changes I've been working on kind of just snowballed into a lot more than I intended originally.

jasonacox commented 11 months ago

Nice! You always spot good improvements. Looking forward to this... 👍

mcbirse commented 11 months ago

Hi @jasonacox - I believe I have committed the majority of my changes to the v3.0.0 branch now... just pending some release notes (which could be extensive depending on how much detail we want), and then will create a PR for test and review.

When you have a chance, would you be able to run the upload.sh script from the v3.0.0 branch Powerwall-Dashboard/tools/tesla-history directory, to push v0.1.4 build of tesla-history container to Docker Hub please? This will be required to test everything. Thanks!

jasonacox commented 11 months ago

run the upload.sh script

Done! https://hub.docker.com/r/jasonacox/tesla-history/tags

With this being a major rev, I suggest we keep the release notes high level (the commits will show the deltas for anyone wanting to know the details).