jasonacox / pypowerwall

Python API for Tesla Powerwall and Solar Power Data
MIT License
134 stars · 24 forks

Tesla Cloud Option #59

Closed - jasonacox closed this 9 months ago

jasonacox commented 9 months ago

This PR provides two updates:

mcbirse commented 9 months ago

Hi Jason, I have been testing this and checking that the cloud mode API responses attempt to replicate real TEG responses as closely as possible.

Can you check what your TEG returns for the following API requests?

For /api/operation mine only returns:

{
  "real_mode": "self_consumption",
  "backup_reserve_percent": 33.5
}

My TEG doesn't include the items I have commented out below, but cloud mode is returning these - does your TEG include them?

            data = {
                "real_mode": default_real_mode,
                "backup_reserve_percent": backup_reserve_percent
                # "freq_shift_load_shed_soe": 0,
                # "freq_shift_load_shed_delta_f": 0
            }

Also for /api/site_info/site_name mine only returns:

{
  "site_name": "My powerwall",
  "timezone": "Australia/Sydney"
}

But the cloud mode response for this API is returning a lot more data, should it be?

            data = {
                # "max_system_energy_kWh": nameplate_energy,
                # "max_system_power_kW": nameplate_power,
                "site_name": sitename,
                "timezone": tz
                # "max_site_meter_power_kW": max_site_meter_power_ac,
                # "min_site_meter_power_kW": min_site_meter_power_ac,
                # "nominal_system_energy_kWh": nameplate_energy,
                # "nominal_system_power_kW": nameplate_power,
                # "panel_max_current": None,
                # "grid_code": {
                #     "grid_code": None,
                #     "grid_voltage_setting": None,
                #     "grid_freq_setting": None,
                #     "grid_phase_setting": None,
                #     "country": None,
                #     "state": None,
                #     "utility": utility
                # }
            }
jasonacox commented 9 months ago

Ha! We are looking at the same thing. I spotted the mismatch on the APIs for /site_info - new commit

And yes, my /api/operation is more verbose - which is odd.

>>> pw.poll('/api/operation')
'{"real_mode":"self_consumption","backup_reserve_percent":24,"freq_shift_load_shed_soe":65,"freq_shift_load_shed_delta_f":-0.32}'

I'm basically running side-by-side tests: one Python session using the local connection, the other the cloud, running the same pw.poll(API) calls against each API to check for differences.
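
For anyone wanting to replicate that, here is a minimal sketch of the side-by-side comparison (the host, password, email, and timezone values are placeholders; an empty host selects the new cloud mode, per the approach discussed below):

import pypowerwall

email, tz = "email@example.com", "America/Los_Angeles"   # placeholders

# Local gateway connection (placeholder IP and password)
local = pypowerwall.Powerwall("192.168.91.1", "password", email, tz)
# Cloud mode - empty host/password tells pypowerwall to use the Tesla Cloud
cloud = pypowerwall.Powerwall("", "", email, tz)

for api in ["/api/operation", "/api/site_info/site_name", "/api/meters/aggregates"]:
    print(api)
    print("  local:", local.poll(api))
    print("  cloud:", cloud.poll(api))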

mcbirse commented 9 months ago

And yes, my /api/operation is more verbose - which is odd.

Interesting - all good!

Note I committed a fix too, for the backup reserve percent which needs scaling applied.

jasonacox commented 9 months ago

I saw the SOE fix, awesome!! I was testing that about the same time your commit came through. One interesting thing I have noticed is that the SOE update frequency on the cloud is a lot lower than local (it looks like roughly 1-minute updates). The power data (pw.power()) seems to update about every 1-3 seconds, but I notice they round the values to the nearest 10 (i.e. 1310 instead of 1312). It's fine, just an interesting difference.

mcbirse commented 9 months ago

Actually I haven't checked the soe ("percentage_charged" response value) yet, only the backup reserve percent... my Powerwall is at 100% and I need the sun to go down to test it!! 😄

I have noticed the update time differences as well. "SITE_DATA" (live_status) updates quite frequently, but others like "SITE_CONFIG" can be delayed, even up to 30 minutes. If the value exists in the live_status response, it is obviously best to use that.

jasonacox commented 9 months ago

I'm going to update /api/operation to match yours since that is a valid payload and we can't get the freq_shift data from the cloud.

jasonacox commented 9 months ago

I switched /api/system_status/soe to use SITE_SUMMARY - it seems a bit faster, but it may just be me. That's my last commit for tonight - feel free to test, edit, and commit any fixes you see if you have time.

        elif api in ['/api/system_status/soe']:
            battery = self.get_battery()
            percentage_charged = lookup(battery, ("response", "percentage_charged")) or 0
            # The local soe keeps a 5% buffer at the bottom - scale the cloud value to match
            soe = (percentage_charged + (5 / 0.95)) * 0.95
            data = {
                "percentage": soe,
            }
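
For clarity, the expression above is algebraically the same as soe = percentage_charged × 0.95 + 5: a cloud reading of 66% maps to roughly 67.7%, 0% maps to 5%, and 100% stays at 100%, reproducing the local gateway's scale that reserves the bottom 5%.
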
mcbirse commented 9 months ago

Great! I will continue testing later tonight.

mcbirse commented 9 months ago

I switched /api/system_status/soe to use SITE_SUMMARY - it seems a bit faster, but it may just be me.

Not just you - since you mentioned it I tested and also observed it appears to update faster!

jasonacox commented 9 months ago

Good fixes @mcbirse !

I'm looking to see what else we could pull from the Cloud (possibly into /vitals).

get_site_power() (TeslaPy SITE_DATA) has:

"energy_left": 21276.894736842103
"total_pack_energy": 25939

Feedback from the community is that they would like to keep tracking PW capacity to plot degradation. I believe we can get this with total_pack_energy. Unfortunately it isn't separated by Powerwall, so if we add this to vitals it won't match.

This data point is already in pw.system_status(). However, we need to add it to the proxy for the Dashboard. I'm going to add this as a new datapoint in the aggregate http://pypowerwall:8675/pod API since that already has battery information. Also, it is already in our telegraf config to ingest into InfluxDB. We will just need to add another CQ (continuous query) and update the dashboards.
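
For reference, a quick sketch of pulling that data point from the existing payload (pw is an existing pypowerwall.Powerwall instance; the assumption here is that cloud mode fills nominal_full_pack_energy from the TeslaPy total_pack_energy value):

# Sketch: read the full-charge capacity from the existing system_status payload
status = pw.system_status()
capacity_wh = status.get("nominal_full_pack_energy")
print(f"Full-charge capacity: {capacity_wh} Wh")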

get_battery() (TeslaPy SITE_SUMMARY) has:

'total_pack_energy': 25786

I'm going to see if this updates faster than the SITE_DATA version, but it doesn't matter if this one lags.

get_site_config() (TeslaPy SITE_CONFIG) has:

                "inverters": 
                    {
                        "device_id": "xxxxxxxxxxxxxxxxxx",
                        "din": "xxxxxxxxx",
                        "is_active": true,
                        "site_id": "xxxxxxxxxxxxxxxxxx",
                    }
                ],

I don't know if "is_active" would be useful for the inverters.

Other Interesting Cloud Data

>>> pw.Tesla.site.api("ENERGY_SITE_BACKUP_TIME_REMAINING")
{'response': {'time_remaining_hours': 8.70358620385033}}

That could be an interesting metric for the dashboard. I'll add it as a cloud get_time_remaining() function and add it to the proxy's http://pypowerwall:8675/pod payload as well.

There may be others (reference) but nothing jumped out at me.

mcbirse commented 9 months ago

Sounds good. I previously went through all of the TeslaPy endpoints related to "energy_sites" and basically came up with the same list. 😄

jasonacox commented 9 months ago

I'm running a Powerwall-Dashboard instance in cloud-only mode. No issues so far. I need to figure out how to simulate a solar-only scenario to test that, but I think we are close to merging v0.7.0. I'll work on the docs.

[image]
jasonacox commented 9 months ago

I think I need to also address the case where users have multiple sites and need to pick the right one. Right now, it assumes the first one:

        # Get site info
        # TODO: Select the right site
        self.site = self.getsites()[0]

I can have the setup present the list if there is more than one site and record the user's selection. I'll add a function to allow changing the site in pypowerwall (cloud.py) and an environment variable setting for the proxy.
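
A minimal sketch of the selection-by-id idea (the method name set_site_id and its behaviour here are illustrative, not the final cloud.py implementation):

    def set_site_id(self, siteid):
        """Select the energy site by energy_site_id instead of list position (sketch)."""
        for site in self.getsites():
            if str(site["energy_site_id"]) == str(siteid):
                self.site = site
                return True
        log.error(f"Site {siteid} not found - defaulting to the first site")
        self.site = self.getsites()[0]
        return False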

jasonacox commented 9 months ago

I worked on setup mode (python3 -m pypowerwall setup) for a slightly easier edit flow and siteid (energy_site_id) selection.

jasonacox commented 9 months ago

Now tracking the Powerwall full-charge capacity (nominal_full_pack_energy). This will be the case with v0.7.0 going forward, for either local or cloud mode.

[image]
mcbirse commented 9 months ago

This is looking fantastic. I will do some more testing of the latest changes.

Also, I was thinking... we may want to consider reverting the COMPOSE_PROFILES and profile selection changes back to how it was before, since they will now be irrelevant. This would also resolve some compatibility issues where old docker compose versions do not support profiles.

The mode in which Powerwall-Dashboard runs (with TEG vs solar-only/PW3) would be determined by pypowerwall configuration only. All docker container requirements would be the same I believe.

The tesla-history script would still have its place to fill in missing data (e.g. after an internet outage) or fetch historical data, but would no longer need to run as a docker container.

jasonacox commented 9 months ago

Thanks @mcbirse ! I found something interesting. The Tesla App sends a counter parameter to query SITE_DATA (eg GET api/1/energy_sites/{site_id}/live_status?counter={counter}&language=en). I added a counter and it seems like the updates are happening sooner (minus the 5s cache I added). I tested toggling "off grid" and the updates were within the 5s cache. The battery level still seems slow to update, and SITE_SUMMARY seems faster.

I agree with your proposal. This will simplify setup. The only thing needed to prompt pypowerwall to use the cloud is to have an empty value for PW_HOST. Our setup.sh can handle that and would also need to run the Tesla setup to record the cloud auth token.

I added a mock /vitals payload that simulates alerts. I have it adding "SystemConnectedToGrid", "ScheduledIslandContactorOpen" and "UnscheduledIslandContactorOpen" alerts via the "island_status" data we get (see the sketch below). It got me thinking that we could actually add more non-standard alerts to indicate system state, for example, "storm_mode_enabled", "backup_capable", "self_consumption mode", etc., which would be nice in the time series chart we are using. I don't think it needs to be in this version, but something to consider.
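
A rough sketch of that island_status mapping as a standalone helper (the value strings are assumed from the live_status payload; power stands for the get_site_power() response dict - this is not the exact commit):

def derive_alerts(power):
    """Sketch: map the cloud island_status value to pseudo-alert names (assumed values)."""
    alerts = []
    island_status = (power or {}).get("response", {}).get("island_status")
    if island_status == "on_grid":
        alerts.append("SystemConnectedToGrid")
    elif island_status == "off_grid_intentional":
        alerts.append("ScheduledIslandContactorOpen")
    elif island_status == "off_grid_unintentional":
        alerts.append("UnscheduledIslandContactorOpen")
    return alerts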

Comparison Between Local and Cloud - 24 Hours

Local

[image]

Cloud

[image]

Metrics are close. Cloud still doesn't match the update fidelity of the local API, but the graphs are generally the same.

mcbirse commented 9 months ago

Thanks @mcbirse ! I found something interesting. The Tesla App sends a counter parameter to query SITE_DATA (eg GET api/1/energy_sites/{site_id}/live_status?counter={counter}&language=en). I added a counter and it seems like the updates are happening sooner (minus the 5s cache I added).

Awesome find!

I still need to test some more... I didn't have much time tonight; I started testing the setup process while simulating a multiple-site scenario and found a few issues. I will commit a few changes which should hopefully resolve these.

jasonacox commented 9 months ago

Awesome. Excellent fixes!!! I started with the (bad) idea of using the index of the response JSON and switched to using energy_site_id as you did in Tesla_history. Thanks for fixing these. 🙏

jasonacox commented 9 months ago

While running a Powerwall-Dashboard stack based on this v0.7.0 PR (pyPowerwall [0.7.0] Proxy Server [t33]), I noticed two errors:

From what I could tell, there are times when the refreshes of the data all hit at the same time. Many of the aggregate functions make multiple calls to assemble their data. Because the cache is set with the same TTL for all these calls, they all expire at the same time and a thundering herd of API calls hits Tesla. I made a few tweaks in the above commit which stagger the TTLs as follows:

"SITE_DATA" = pwcacheexpire           # Live power data, most important for currency
"SITE_SUMMARY" = pwcacheexpire + 1    # Battery data
"TIME_REMAINING" = pwcacheexpire + 3  # Estimate data ENERGY_SITE_BACKUP_TIME_REMAINING

"SITE_CONFIG" = SITE_CONFIG_TTL = 59  # Mostly static config data, hardcoded 1m

If pwcacheexpire == 0, it removes all cache for each of these, still allowing a 'no cache' mode. After making these changes, I am no longer seeing the 429 error with the default pwcacheexpire=5.
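
In code terms, the intent is roughly this (a sketch of the staggering, not the exact commit):

def cache_ttls(pwcacheexpire, site_config_ttl=59):
    """Sketch: staggered cache TTLs in seconds; 0 disables caching entirely."""
    if pwcacheexpire <= 0:
        return dict.fromkeys(
            ["SITE_DATA", "SITE_SUMMARY", "ENERGY_SITE_BACKUP_TIME_REMAINING", "SITE_CONFIG"], 0)
    return {
        "SITE_DATA": pwcacheexpire,                              # live power data
        "SITE_SUMMARY": pwcacheexpire + 1,                       # battery data
        "ENERGY_SITE_BACKUP_TIME_REMAINING": pwcacheexpire + 3,  # backup time estimate
        "SITE_CONFIG": site_config_ttl,                          # mostly static config
    }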

This shows up in the log for all the API calls (e.g. self.site.api("SITE_SUMMARY", language="en")), but it isn't fatal. The token gets renewed, but the stack trace still shows up. I need to research how best to capture this without muting other errors.

jasonacox commented 9 months ago

Published beta container image for anyone wanting to test: jasonacox/pypowerwall:0.7.0t33beta

mcbirse commented 9 months ago

With the changes made regarding TTL I don't think this has fixed the root cause of the issue.

I actually noticed this issue a few days ago but hadn't had time to work on a fix. I have worked out a good solution this afternoon however, so once I finish testing will commit the changes I recommend.

This probably needs an explanation though.

From what I could tell, there are times when the refresh of the data all hit at the same time.

One of the reasons this is occurring is because telegraf requests all of the input URLs simultaneously (every 5 seconds) and pypowerwall is running as a multi-threaded HTTP server.

So, to service the telegraf requests, we have multiple pypowerwall threads each sending a cloud API request to the Tesla servers - and this could be the same API request - and since they are all sent at the same time, there has been no time for a response to be cached.

I have a change almost ready that essentially adds a mutex lock around the cloud API requests. This means only one thread would send the API request, while the other threads will wait, and when the lock is released those waiting threads will return cached data.

It's much friendlier to the Tesla servers, would probably resolve the 429 error, and may also mean the TTL changes are not required.
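
For illustration, the idea is roughly the following - a simplified sketch using one threading lock per API name (not the actual commit, which may be structured differently):

import threading
import time

api_locks = {}   # one lock per cloud API name
api_cache = {}   # API name -> (timestamp, response)

def cloud_fetch(name, ttl, call_api):
    """Sketch: only one thread calls the Tesla API; waiting threads reuse the fresh cache."""
    lock = api_locks.setdefault(name, threading.Lock())
    with lock:
        ts, response = api_cache.get(name, (0, None))
        if time.time() - ts < ttl:
            return response              # a thread that waited lands here
        response = call_api(name)        # only the lock holder hits the Tesla API
        api_cache[name] = (time.time(), response)
        return response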

I have tested with the TTL changes and confirmed that this issue is still present. Below is some log output where you can see multiple copies of the same cloud API request being sent in the same second (e.g. look for "Fetching new data for SITE_CONFIG", which is being sent simultaneously by different threads).

12/27/2023 06:51:25 PM [proxy] [DEBUG] 172.18.0.4 "GET /freq HTTP/1.1" 200 -
12/27/2023 06:51:25 PM [pypowerwall.cloud] [DEBUG]  -- cloud: Request for /vitals
12/27/2023 06:51:25 PM [pypowerwall.cloud] [DEBUG] Fetching new data for SITE_CONFIG
12/27/2023 06:51:25 PM [proxy] [DEBUG] 172.18.0.4 "GET /temps/pw HTTP/1.1" 200 -
12/27/2023 06:51:25 PM [pypowerwall.cloud] [DEBUG]  -- cloud: Request for /vitals
12/27/2023 06:51:25 PM [pypowerwall.cloud] [DEBUG] Fetching new data for SITE_CONFIG
12/27/2023 06:51:25 PM [proxy] [DEBUG] 172.18.0.4 "GET /strings HTTP/1.1" 200 -
12/27/2023 06:51:25 PM [pypowerwall.cloud] [DEBUG]  -- cloud: Request for /vitals
12/27/2023 06:51:25 PM [pypowerwall.cloud] [DEBUG] Fetching new data for SITE_CONFIG
12/27/2023 06:51:25 PM [proxy] [DEBUG] 172.18.0.4 "GET /soe HTTP/1.1" 200 -
12/27/2023 06:51:25 PM [pypowerwall.cloud] [DEBUG]  -- cloud: Request for /api/system_status/soe
12/27/2023 06:51:25 PM [pypowerwall.cloud] [DEBUG] Fetching new data for SITE_SUMMARY
12/27/2023 06:51:25 PM [proxy] [DEBUG] 172.18.0.4 "GET /aggregates HTTP/1.1" 200 -
12/27/2023 06:51:25 PM [pypowerwall.cloud] [DEBUG]  -- cloud: Request for /api/meters/aggregates
12/27/2023 06:51:25 PM [pypowerwall.cloud] [DEBUG] Fetching new data for SITE_DATA
12/27/2023 06:51:25 PM [proxy] [DEBUG] 172.18.0.4 "GET /pod HTTP/1.1" 200 -
12/27/2023 06:51:25 PM [pypowerwall.cloud] [DEBUG]  -- cloud: Request for /vitals
12/27/2023 06:51:25 PM [pypowerwall.cloud] [DEBUG] Fetching new data for SITE_CONFIG
12/27/2023 06:51:25 PM [proxy] [DEBUG] 172.18.0.4 "GET /alerts/pw HTTP/1.1" 200 -
12/27/2023 06:51:25 PM [pypowerwall.cloud] [DEBUG]  -- cloud: Request for /vitals
12/27/2023 06:51:25 PM [pypowerwall.cloud] [DEBUG] Fetching new data for SITE_CONFIG
12/27/2023 06:51:25 PM [pypowerwall.cloud] [DEBUG] Fetching new data for SITE_DATA
12/27/2023 06:51:25 PM [pypowerwall.cloud] [DEBUG]  -- cloud: Request for /api/system_status/grid_status
12/27/2023 06:51:25 PM [pypowerwall.cloud] [DEBUG] Return cached response for SITE_DATA
12/27/2023 06:51:26 PM [pypowerwall.cloud] [DEBUG] Return cached response for SITE_DATA
12/27/2023 06:51:26 PM [pypowerwall.cloud] [DEBUG] Return cached response for SITE_DATA
12/27/2023 06:51:26 PM [pypowerwall.cloud] [DEBUG]  -- cloud: Request for /api/operation
12/27/2023 06:51:26 PM [pypowerwall.cloud] [DEBUG] Return cached response for SITE_CONFIG
12/27/2023 06:51:26 PM [pypowerwall.cloud] [DEBUG]  -- cloud: Request for /api/system_status
12/27/2023 06:51:26 PM [pypowerwall.cloud] [DEBUG] Return cached response for SITE_DATA
12/27/2023 06:51:26 PM [pypowerwall.cloud] [DEBUG] Return cached response for SITE_CONFIG
12/27/2023 06:51:26 PM [pypowerwall.cloud] [DEBUG] Return cached response for SITE_SUMMARY
12/27/2023 06:51:26 PM [pypowerwall.cloud] [DEBUG] Fetching new data for ENERGY_SITE_BACKUP_TIME_REMAINING
12/27/2023 06:51:26 PM [pypowerwall.cloud] [DEBUG] Return cached response for SITE_DATA
12/27/2023 06:51:26 PM [pypowerwall.cloud] [DEBUG] Return cached response for SITE_DATA
12/27/2023 06:51:26 PM [pypowerwall.cloud] [DEBUG] Return cached response for SITE_CONFIG

Here is the difference with the changes I'm working on... only one thread sends the SITE_CONFIG request (for example); the other threads wait and then return cached data once the first thread gets the response. It happens within a fraction of a second.

12/27/2023 04:40:50 PM [proxy] [DEBUG] 172.18.0.4 "GET /alerts/pw HTTP/1.1" 200 -
12/27/2023 04:40:50 PM [pypowerwall.cloud] [DEBUG]  -- cloud: Request for /vitals
12/27/2023 04:40:50 PM [pypowerwall.cloud] [DEBUG] Fetching new data for SITE_CONFIG
12/27/2023 04:40:50 PM [proxy] [DEBUG] 172.18.0.4 "GET /strings HTTP/1.1" 200 -
12/27/2023 04:40:50 PM [pypowerwall.cloud] [DEBUG]  -- cloud: Request for /vitals
12/27/2023 04:40:50 PM [proxy] [DEBUG] 172.18.0.4 "GET /soe HTTP/1.1" 200 -
12/27/2023 04:40:50 PM [pypowerwall.cloud] [DEBUG]  -- cloud: Request for /api/system_status/soe
12/27/2023 04:40:50 PM [pypowerwall.cloud] [DEBUG] Fetching new data for SITE_SUMMARY
12/27/2023 04:40:50 PM [proxy] [DEBUG] 172.18.0.4 "GET /aggregates HTTP/1.1" 200 -
12/27/2023 04:40:50 PM [pypowerwall.cloud] [DEBUG]  -- cloud: Request for /api/meters/aggregates
12/27/2023 04:40:50 PM [pypowerwall.cloud] [DEBUG] Fetching new data for SITE_DATA
12/27/2023 04:40:50 PM [proxy] [DEBUG] 172.18.0.4 "GET /pod HTTP/1.1" 200 -
12/27/2023 04:40:50 PM [pypowerwall.cloud] [DEBUG]  -- cloud: Request for /vitals
12/27/2023 04:40:50 PM [proxy] [DEBUG] 172.18.0.4 "GET /temps/pw HTTP/1.1" 200 -
12/27/2023 04:40:50 PM [pypowerwall.cloud] [DEBUG]  -- cloud: Request for /vitals
12/27/2023 04:40:50 PM [proxy] [DEBUG] 172.18.0.4 "GET /freq HTTP/1.1" 200 -
12/27/2023 04:40:50 PM [pypowerwall.cloud] [DEBUG]  -- cloud: Request for /vitals
12/27/2023 04:40:50 PM [pypowerwall.cloud] [DEBUG] Return cached response for SITE_CONFIG
12/27/2023 04:40:50 PM [pypowerwall.cloud] [DEBUG] Return cached response for SITE_CONFIG
12/27/2023 04:40:50 PM [pypowerwall.cloud] [DEBUG] Return cached response for SITE_CONFIG
12/27/2023 04:40:50 PM [pypowerwall.cloud] [DEBUG] Return cached response for SITE_CONFIG
12/27/2023 04:40:50 PM [pypowerwall.cloud] [DEBUG] Return cached response for SITE_CONFIG
12/27/2023 04:40:50 PM [pypowerwall.cloud] [DEBUG] Return cached response for SITE_DATA
12/27/2023 04:40:50 PM [pypowerwall.cloud] [DEBUG]  -- cloud: Request for /api/operation
12/27/2023 04:40:50 PM [pypowerwall.cloud] [DEBUG] Return cached response for SITE_CONFIG
12/27/2023 04:40:50 PM [pypowerwall.cloud] [DEBUG]  -- cloud: Request for /api/system_status
12/27/2023 04:40:50 PM [pypowerwall.cloud] [DEBUG] Return cached response for SITE_DATA
12/27/2023 04:40:50 PM [pypowerwall.cloud] [DEBUG] Return cached response for SITE_CONFIG
12/27/2023 04:40:50 PM [pypowerwall.cloud] [DEBUG] Return cached response for SITE_SUMMARY
12/27/2023 04:40:50 PM [pypowerwall.cloud] [DEBUG] Fetching new data for ENERGY_SITE_BACKUP_TIME_REMAINING
12/27/2023 04:40:50 PM [pypowerwall.cloud] [DEBUG] Return cached response for SITE_DATA
12/27/2023 04:40:50 PM [pypowerwall.cloud] [DEBUG]  -- cloud: Request for /api/system_status/grid_status
12/27/2023 04:40:50 PM [pypowerwall.cloud] [DEBUG] Return cached response for SITE_DATA
12/27/2023 04:40:50 PM [pypowerwall.cloud] [DEBUG] Return cached response for SITE_DATA
12/27/2023 04:40:50 PM [pypowerwall.cloud] [DEBUG] Return cached response for SITE_DATA
12/27/2023 04:40:50 PM [pypowerwall.cloud] [DEBUG] Return cached response for SITE_DATA

Also, I was testing what happens when the Internet drops, and am tweaking retries/timeouts, as the current behaviour was not great.

jasonacox commented 9 months ago

I have a change almost ready that essentially adds a mutex lock around the cloud API requests.

This is great! ❤️ I do think the App is sending multiple concurrent requests as well, just nothing like the telegraf blast. The TTL hack did enough to eliminate the throttling from Tesla in my testing, but that doesn't mean it wasn't still an extreme and unkind level of requests (compared to the App). Getting this mutex solution in place should mitigate that and is a much cleaner solution than the TTL logic. Thanks!

I didn't have a lot of time to spend on it today. I did push a few small changes to our stats API and logging to help indicate when the proxy is in cloud mode and to provide additional information for troubleshooting. Also, this new "cloud mode" code is working on the multiple Dashboard instances I'm testing across different OSs. Submit your mutex changes and I'll roll them out to my test platforms too.

Also, I started updating Powerwall-Dashboard (mainly powerwall.yml to map the TeslaPy auth and site files) as needed for my testing. Added some bits to verify.sh as well. New Branch is v4.0.0: https://github.com/jasonacox/Powerwall-Dashboard/commit/a0ed67cade9ce5f791766844ba8dbeaf95dbe9c6

mcbirse commented 9 months ago

Hi Jason,

Some notes on my changes as they might need further explanation. Hopefully I have not made any breaking changes. My testing shows all working well, but please test and let me know if you find any issues!

jasonacox commented 9 months ago

This is brilliant, @mcbirse !! I made a comment in your commit about abstracting the cache and mutex code in each of the get_*() functions into a central api call management function, but that is minor and could be handled later if it makes sense.

Give it a try: jasonacox/pypowerwall:0.7.1t34

Testing now...

✅ MacOS - local mode, cloud mode
✅ RPi - local mode, cloud mode

mcbirse commented 9 months ago

I agree, putting that replicated code into a central api call management function definitely makes sense. I was considering that but left the changes in each function for easier comparison for now.

jasonacox commented 9 months ago

I like keeping the separate get_*() functions named after their logical payload (the TeslaPy mapping isn't very intuitive IMHO). It also abstracts our API from TeslaPy changes.

I created a central function _site_api() and I'm running tests on it now. Please check me on this:

    def _site_api(self, name, ttl, **args):
        """
        Get site data from Tesla Cloud
            name - API name
            ttl - cache time to live in seconds
            args - Additional API arguments

        Returns (response, cached)
        """
        if self.tesla is None:
            return (None, False)
        # Check for lock and wait if api request already sent
        if name in self.apilock:
            locktime = time.perf_counter()
            while self.apilock[name]:
                time.sleep(0.2)
                if time.perf_counter() >= locktime + self.timeout:
                    return (None, False)
        # Check to see if we have cached data
        if name in self.pwcache:
            if self.pwcachetime[name] > time.perf_counter() - ttl:
                return (self.pwcache[name], True)
        try:
            # Set lock
            self.apilock[name] = True
            response = self.site.api(name, **args)  # pass kwargs through to TeslaPy
        except Exception as err:
            log.error(f"ERROR: Failed to retrieve {name} - {repr(err)}")
            response = None
        else:
            self.pwcache[name] = response
            self.pwcachetime[name] = time.perf_counter()
        finally:
            # Release lock
            self.apilock[name] = False
            return (response, False)

    def get_battery(self):
        """
        Get site battery data from Tesla Cloud
        ...
        """
        # GET api/1/energy_sites/{site_id}/site_status
        (response, cached) =  self._site_api("SITE_SUMMARY", 
                                             self.pwcacheexpire, language="en")
        return response

    def get_site_power(self):
        """
        Get site power data from Tesla Cloud
        ...
        """
        # GET api/1/energy_sites/{site_id}/live_status?counter={counter}&language=en 
        (response, cached) =  self._site_api("SITE_DATA", 
                                             self.pwcacheexpire, counter=self.counter, language="en")
        if not cached:
            self.counter = (self.counter + 1) % COUNTER_MAX
        return response

    def get_site_config(self):
        """
        Get site configuration data from Tesla Cloud
        ...
        """
        # GET api/1/energy_sites/{site_id}/site_info
        (response, cached) =  self._site_api("SITE_CONFIG", 
                                             SITE_CONFIG_TTL, language="en")
        return response

    def get_time_remaining(self):
        """
        Get backup time remaining from Tesla Cloud

        {'response': {'time_remaining_hours': 7.909122698326978}}
        """
        # GET api/1/energy_sites/{site_id}/backup_time_remaining
        (response, cached) =  self._site_api("ENERGY_SITE_BACKUP_TIME_REMAINING",
                                                self.pwcacheexpire, language="en")
        return response
mcbirse commented 9 months ago

At first glance looks good to me!

I like the idea that it returns whether the response was a cached response or not.

Also, I think it might be helpful to add debug log output showing when we actually return the response data, including whether it was a cached response or not - what do you think? An ERROR log entry is already output on timeouts.

i.e. the DEBUG logs show when pypowerwall receives a request and when the cloud request is made, but not when we get the response (or return a cached response). I had added additional logging for this when testing, which was invaluable.

12/29/2023 05:50:40 PM [proxy] [DEBUG] 172.18.0.1 "GET /alerts/pw HTTP/1.1" 200 -
12/29/2023 05:50:40 PM [pypowerwall.cloud] [DEBUG]  -- cloud: Request for /vitals
12/29/2023 05:50:40 PM [proxy] [DEBUG] 172.18.0.1 "GET /freq HTTP/1.1" 200 -
12/29/2023 05:50:40 PM [pypowerwall.cloud] [DEBUG]  -- cloud: Request for /vitals
12/29/2023 05:50:40 PM [proxy] [DEBUG] 172.18.0.1 "GET /strings HTTP/1.1" 200 -
12/29/2023 05:50:40 PM [pypowerwall.cloud] [DEBUG]  -- cloud: Request for /vitals
12/29/2023 05:50:40 PM [proxy] [DEBUG] 172.18.0.1 "GET /aggregates HTTP/1.1" 200 -
12/29/2023 05:50:40 PM [proxy] [DEBUG] 172.18.0.1 "GET /pod HTTP/1.1" 200 -
12/29/2023 05:50:40 PM [pypowerwall.cloud] [DEBUG]  -- cloud: Request for /vitals
12/29/2023 05:50:40 PM [proxy] [DEBUG] 172.18.0.1 "GET /temps/pw HTTP/1.1" 200 -
12/29/2023 05:50:40 PM [pypowerwall.cloud] [DEBUG]  -- cloud: Request for /vitals
12/29/2023 05:50:40 PM [proxy] [DEBUG] 172.18.0.1 "GET /soe HTTP/1.1" 200 -
12/29/2023 05:50:40 PM [pypowerwall.cloud] [DEBUG]  -- cloud: Request for /api/system_status/soe
12/29/2023 05:50:40 PM [pypowerwall.cloud] [DEBUG]  -- cloud: Request for /api/meters/aggregates
12/29/2023 05:50:40 PM [pypowerwall.cloud] [DEBUG]  -- cloud: Request for /api/system_status/grid_status
12/29/2023 05:50:40 PM [pypowerwall.cloud] [DEBUG]  -- cloud: Request for /api/operation
12/29/2023 05:50:40 PM [pypowerwall.cloud] [DEBUG]  -- cloud: Request for /api/system_status
jasonacox commented 9 months ago

I pushed the _site_api() update with additional debug logging. Please feel free to adjust.

jasonacox commented 9 months ago

12/28/2023 11:58:10 PM [pypowerwall.cloud] [ERROR] ERROR: Failed to retrieve SITE_DATA - ConnectionError(ProtocolError('Connection aborted.', RemoteDisconnected('Remote end closed connection without response')))

I should remove the "ERROR" prefix in the log.error() calls. 😀 But otherwise, logging is working well.

jasonacox commented 9 months ago

This is looking good!

Successful tests using jasonacox/pypowerwall:0.7.1t35beta pre-release.

✅ RPi - cloud mode and local mode
✅ MacOS - cloud mode and local mode
✅ WinOS - cloud mode and local mode
✅ Ubuntu Linux - cloud mode and local mode

As a note, a helpful page for confirming the proxy is running: http://localhost:8675/help

I'm going to squash, merge to main, and push pypowerwall v0.7.1 to PyPI for the next round of non-beta testing.

@mcbirse great job! Feel free to continue to branch and submit updates if we need a v0.7.2.

jasonacox commented 9 months ago

v0.7.1 Released: https://pypi.org/project/pypowerwall/0.7.1/