Closed: jasonacox closed this 9 months ago
Hi Jason, I have been testing this and checking that the cloud mode API responses attempt to replicate real TEG responses as closely as possible.
Can you check what your TEG returns for the following API requests?
For /api/operation, mine only returns:

```json
{
  "real_mode": "self_consumption",
  "backup_reserve_percent": 33.5
}
```
My TEG doesn't include the items I have commented out below, but cloud mode is returning these - does your TEG include them?
```python
data = {
    "real_mode": default_real_mode,
    "backup_reserve_percent": backup_reserve_percent
    # "freq_shift_load_shed_soe": 0,
    # "freq_shift_load_shed_delta_f": 0
}
```
Also for /api/site_info/site_name, mine only returns:

```json
{
  "site_name": "My powerwall",
  "timezone": "Australia/Sydney"
}
```
But the cloud mode response for this API is returning a lot more data, should it be?
```python
data = {
    # "max_system_energy_kWh": nameplate_energy,
    # "max_system_power_kW": nameplate_power,
    "site_name": sitename,
    "timezone": tz
    # "max_site_meter_power_kW": max_site_meter_power_ac,
    # "min_site_meter_power_kW": min_site_meter_power_ac,
    # "nominal_system_energy_kWh": nameplate_energy,
    # "nominal_system_power_kW": nameplate_power,
    # "panel_max_current": None,
    # "grid_code": {
    #     "grid_code": None,
    #     "grid_voltage_setting": None,
    #     "grid_freq_setting": None,
    #     "grid_phase_setting": None,
    #     "country": None,
    #     "state": None,
    #     "utility": utility
    # }
}
```
Ha! We are looking at the same thing. I spotted the mismatch on the APIs for /site_info - new commit
And yes, my /api/operations is more verbose - which is odd.
```python
>>> pw.poll('/api/operation')
'{"real_mode":"self_consumption","backup_reserve_percent":24,"freq_shift_load_shed_soe":65,"freq_shift_load_shed_delta_f":-0.32}'
```
I'm basically doing side-by-side testing with one python session using the local connection and the other using the cloud, running the same pw.poll(API) calls for each API to test differences.
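The side-by-side comparison above can be sketched as a small helper. This is an illustrative snippet, not part of pypowerwall: `diff_keys` is a hypothetical name, and the two JSON strings below are taken from the responses quoted in this thread.

```python
import json

def diff_keys(local_json, cloud_json):
    """Return keys present in the local TEG payload but missing from the cloud payload."""
    local = json.loads(local_json)
    cloud = json.loads(cloud_json)
    return sorted(set(local) - set(cloud))

# Example payloads from this discussion (local TEG vs cloud mode)
local_resp = '{"real_mode":"self_consumption","backup_reserve_percent":24,"freq_shift_load_shed_soe":65,"freq_shift_load_shed_delta_f":-0.32}'
cloud_resp = '{"real_mode":"self_consumption","backup_reserve_percent":33.5}'
print(diff_keys(local_resp, cloud_resp))
# prints ['freq_shift_load_shed_delta_f', 'freq_shift_load_shed_soe']
```

With two live pypowerwall instances (one local, one cloud), the same check could be run over each endpoint's `pw.poll(API)` output.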
> And yes, my /api/operations is more verbose - which is odd.
Interesting - all good!
Note I committed a fix too, for the backup reserve percent which needs scaling applied.
I saw the SOE fix, awesome!! I was testing that about the same time your commit came through. One interesting thing I have noticed is that the frequency of SOE updates on the cloud is a lot less than local (looks like it is close to 1 minute updates). But it seems like the power data (pw.power()) updates about every 1-3 seconds, though I do notice that they keep the data to the 10's (i.e. 1310 instead of 1312). It's fine, just an interesting difference.
Actually I haven't checked the soe ("percentage_charged" response value) yet, only the backup reserve percent... my Powerwall is at 100% and I need the sun to go down to test it!! 😄
I have noticed the update time differences as well. "SITE_DATA" (live_status) updates quite frequently, but the others like "SITE_CONFIG" could be delayed, even up to 30mins. If the value exists in the live_status response obviously it is best to use that.
I'm going to update /api/operation to match yours since that is a valid payload and we can't get the freq_shift data from the cloud.
I switched /api/system_status/soe to use SITE_SUMMARY - it seems a bit faster, but it may just be me. That's my last commit for tonight - feel free to test, edit, commit any fixes you see if you have time.
```python
elif api in ['/api/system_status/soe']:
    battery = self.get_battery()
    percentage_charged = lookup(battery, ("response", "percentage_charged")) or 0
    # percentage_charged is scaled to keep 5% buffer at bottom
    soe = (percentage_charged + (5 / 0.95)) * 0.95
    data = {
        "percentage": soe,
    }
```
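To make the scaling in the snippet above concrete: the local TEG reports a raw SOE that keeps a 5% buffer at the bottom, so the cloud's app-style `percentage_charged` has to be mapped back onto that scale. The inverse function below is my own addition for illustration, derived from the comment in the code:

```python
def app_to_teg_soe(percentage_charged):
    """Convert app-style (cloud) percentage_charged to TEG-style raw SOE,
    which reserves a 5% buffer at the bottom of the usable range."""
    return (percentage_charged + (5 / 0.95)) * 0.95

def teg_to_app_soe(raw_soe):
    """Hypothetical inverse: the scaling applied to the local TEG value."""
    return (raw_soe - 5) / 0.95

# A full battery maps to 100, an empty app value maps to the 5% buffer
assert round(app_to_teg_soe(100), 6) == 100
assert round(app_to_teg_soe(0), 6) == 5
# Round trip is lossless
assert round(teg_to_app_soe(app_to_teg_soe(42.0)), 6) == 42.0
```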
Great! I will continue testing later tonight.
> I switched /api/system_status/soe to use SITE_SUMMARY - it seems a bit faster, but it may just be me.
Not just you - since you mentioned it I tested and also observed it appears to update faster!
Good fixes @mcbirse !
I'm looking to see what else could pull from the Cloud (possibly into /vitals).
```json
"energy_left": 21276.894736842103,
"total_pack_energy": 25939
```
Feedback from the community is that they would like to keep PW capacity to plot degradation. I believe we can get this with total_pack_energy. Unfortunately it isn't separated by Powerwall. If we add this to vitals it won't match.
This data point is already in pw.system_status(). However, we need to add it to the proxy for the Dashboard. I'm going to add this as a new datapoint in the aggregate http://pypowerwall:8675/pod API since that already has battery information. Also, it is already in our telegraf config to ingest into influxdb. We will just need to add another CQ and update dashboards.
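A minimal sketch of the idea, folding the battery-capacity fields into an aggregate payload. The key names (`PW_soe`, `PW_energy_left`, `PW_total_pack_energy`) and the `build_pod_payload` helper are hypothetical, not the proxy's actual schema:

```python
def build_pod_payload(system_status, soe_percent):
    """Assemble an aggregate battery payload from pw.system_status() data.
    system_status may be None when the upstream fetch failed."""
    pod = {"PW_soe": soe_percent}
    if system_status:
        pod["PW_energy_left"] = system_status.get("energy_left")
        pod["PW_total_pack_energy"] = system_status.get("total_pack_energy")
    return pod

# Values taken from the example responses in this thread
status = {"energy_left": 21276.894736842103, "total_pack_energy": 25939}
print(build_pod_payload(status, 82.0))
```

Plotting `total_pack_energy` over time in the Dashboard would then show capacity degradation, even though it is aggregated across all Powerwalls at the site.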
'total_pack_energy': 25786
I'm going to see if this updates faster than the SITE_DATA version, but it doesn't matter if this one lags.
```json
"inverters": [
  {
    "device_id": "xxxxxxxxxxxxxxxxxx",
    "din": "xxxxxxxxx",
    "is_active": true,
    "site_id": "xxxxxxxxxxxxxxxxxx"
  }
],
```
I don't know if "is_active" would be useful for the inverters.
```python
>>> pw.Tesla.site.api("ENERGY_SITE_BACKUP_TIME_REMAINING")
{'response': {'time_remaining_hours': 8.70358620385033}}
```
That could be an interesting metric for the dashboard. I'll add it as a cloud get_time_remaining() function and add it to the proxy's http://pypowerwall:8675/pod payload as well.
There may be others (reference) but nothing jumped out at me.
Sounds good. I previously went through all of the TeslaPy endpoints related to "energy_sites" and basically came up with the same list. 😄
I'm running a Powerwall-Dashboard instance in cloud-only mode. No issues so far. I need to figure out how to simulate a solar-only scenario to test that, but I think we are close to merging v0.7.0. I'll work on the docs.
I think I also need to address the case where users have multiple sites and need to pick the right one. Right now, it assumes the first one:
```python
# Get site info
# TODO: Select the right site
self.site = self.getsites()[0]
```
I can have the setup present the list if there are more than one and record the user selection. I'll add a function to allow changing the site in pypowerwall (cloud.py) and an environmental setting for the proxy.
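A sketch of selecting by site id rather than list index. This is illustrative only: `select_site` is a hypothetical helper, and the sites here are plain dicts, whereas `getsites()` actually returns TeslaPy battery/site objects keyed by `energy_site_id`.

```python
def select_site(sites, siteid=None):
    """Return the site matching siteid; fall back to the first site.
    Raises ValueError if siteid is given but not found."""
    if siteid is not None:
        for site in sites:
            if site.get("energy_site_id") == siteid:
                return site
        raise ValueError(f"energy_site_id {siteid} not found")
    if len(sites) > 1:
        # In interactive setup, present the list and record the user's choice
        pass
    return sites[0]

sites = [{"energy_site_id": 111, "site_name": "Home"},
         {"energy_site_id": 222, "site_name": "Cabin"}]
print(select_site(sites, 222)["site_name"])  # prints "Cabin"
```

Recording the chosen `energy_site_id` (rather than a list position) keeps the selection stable even if Tesla reorders the sites list between logins.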
Worked on setup mode (python3 -m pypowerwall setup) for a slightly easier edit flow and siteid (energy_site_id) selection.
Tracking the Powerwall full-charge (nominal_full_pack_energy) capacity now. This will be the case with v0.7.0 going forward for either local or cloud mode.
This is looking fantastic. I will do some more testing of the latest changes.
Also I was now thinking... we may want to consider reverting the changes with COMPOSE_PROFILES and profile selection back to how it was before, since it will be irrelevant? This also resolves some compatibility issues where old docker compose versions do not support profiles.
The mode in which Powerwall-Dashboard runs (with TEG vs solar-only/PW3) would be determined by pypowerwall configuration only. All docker container requirements would be the same I believe.
The tesla-history script would still have its place to fill in missing data (internet outage) or fetch historical data, but would no longer need to run as a docker container.
Thanks @mcbirse ! I found something interesting. The Tesla App sends a counter parameter to query SITE_DATA (e.g. GET api/1/energy_sites/{site_id}/live_status?counter={counter}&language=en). I added a counter and it seems like the updates are happening sooner (minus the 5s cache I added). I tested toggling "off grid" and the updates were within the 5s cache. Battery level still seems to be slow to update, and SITE_SUMMARY seems faster.
I agree with your proposal. This will simplify setup. The only thing needed to prompt pypowerwall to use the cloud is to have an empty value for PW_HOST. Our setup.sh can handle that and would also need to run the Tesla setup to record the cloud auth token.
I added a mock /vitals payload that simulates alerts. I have it adding "SystemConnectedToGrid", "ScheduledIslandContactorOpen" and "UnscheduledIslandContactorOpen" alerts via the "island_status" data we get. It got me thinking that we could actually add more non-standard alerts to indicate system state, for example, "storm_mode_enabled", "backup_capable", "self_consumption mode", etc., which would be nice in the time series chart we are using. I don't think it needs to be in this version, but something to consider.
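The island_status mapping described above could look something like the sketch below. The `island_status` string values here ("on_grid", "off_grid_intentional", "off_grid_unintentional") are assumptions about the Tesla live_status payload, and `alerts_from_island_status` is a hypothetical helper name; only the synthetic alert names come from the discussion.

```python
def alerts_from_island_status(island_status):
    """Map a cloud island_status value to synthetic TEG-style alert names."""
    alerts = []
    if island_status == "on_grid":
        alerts.append("SystemConnectedToGrid")
    elif island_status == "off_grid_intentional":
        alerts.append("ScheduledIslandContactorOpen")
    elif island_status == "off_grid_unintentional":
        alerts.append("UnscheduledIslandContactorOpen")
    # Unknown or missing status: no synthetic alert
    return alerts

print(alerts_from_island_status("on_grid"))  # prints ['SystemConnectedToGrid']
```

The same pattern would extend naturally to other non-standard state alerts such as "storm_mode_enabled" or "backup_capable" driven by other config fields.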
(Screenshots: Local vs Cloud dashboard graphs)
Metrics are close. Cloud still doesn't match the update fidelity of the local API, but the graphs are generally the same.
> Thanks @mcbirse ! I found something interesting. The Tesla App sends a counter parameter to query SITE_DATA (e.g. GET api/1/energy_sites/{site_id}/live_status?counter={counter}&language=en). I added a counter and it seems like the updates are happening sooner (minus the 5s cache I added).
Awesome find!
I still need to test some more... I didn't have much time tonight and started testing from the setup process and simulating a multiple site scenario, and found a few issues. I will commit a few changes which should hopefully resolve these.
Awesome. Excellent fixes!!! I started with the (bad) idea of using the index of the response JSON and switched to using energy_site_id as you did in Tesla_history. Thanks for fixing these. 🙏
While running a Powerwall-Dashboard stack based on this v0.7.0 PR (pyPowerwall [0.7.0] Proxy Server [t33]), I noticed two errors:
From what I could tell, there are times when the refresh of the data all hits at the same time. Many of the aggregate functions make multiple calls to assemble the data. Because the cache is set with the same TTL for all these calls, they all expire at the same time and a thundering herd of API calls hits. I made a few tweaks in the above commit which stagger the TTL using this:
```python
"SITE_DATA"      = pwcacheexpire      # Live power data, most important for currency
"SITE_SUMMARY"   = pwcacheexpire + 1  # Battery data
"TIME_REMAINING" = pwcacheexpire + 3  # Estimate data ENERGY_SITE_BACKUP_TIME_REMAINING
"SITE_CONFIG"    = SITE_CONFIG_TTL = 59  # Mostly static config data, hardcoded 1m
```
If pwcacheexpire == 0, it removes all cache for each of these, still allowing a 'no cache' mode. After making these changes, I am no longer seeing the 429 error with the default pwcacheexpire=5.
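The staggered-TTL scheme above can be expressed as a small runnable function. This is a sketch of the idea, not the committed code; `cache_ttls` is a hypothetical name, and the offsets and the `pwcacheexpire == 0` "no cache" behavior follow the description in this comment.

```python
SITE_CONFIG_TTL = 59  # mostly static config data, hardcoded ~1 minute

def cache_ttls(pwcacheexpire):
    """Return per-API cache TTLs, staggered so entries don't all expire at once.
    pwcacheexpire == 0 disables caching entirely."""
    if pwcacheexpire <= 0:
        return {k: 0 for k in ("SITE_DATA", "SITE_SUMMARY", "TIME_REMAINING", "SITE_CONFIG")}
    return {
        "SITE_DATA": pwcacheexpire,           # live power data, keep freshest
        "SITE_SUMMARY": pwcacheexpire + 1,    # battery data
        "TIME_REMAINING": pwcacheexpire + 3,  # backup time estimate
        "SITE_CONFIG": SITE_CONFIG_TTL,       # mostly static config
    }

print(cache_ttls(5))
```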
This shows up in the log for all the API calls (e.g. self.site.api("SITE_SUMMARY", language="en")) but isn't fatal. The token gets renewed, but the stack trace still shows up. I need to research how best to capture this without muting other errors.
Published beta container image for anyone wanting to test: jasonacox/pypowerwall:0.7.0t33beta
With the changes made regarding TTL, I don't think the root cause of the issue has been fixed.
I actually noticed this issue a few days ago but hadn't had time to work on a fix. I have worked out a good solution this afternoon however, so once I finish testing will commit the changes I recommend.
This probably needs an explanation though.
> From what I could tell, there are times when the refresh of the data all hit at the same time.
One of the reasons this is occurring is because telegraf requests all of the input URLs simultaneously (every 5 seconds) and pypowerwall is running as a multi-threaded HTTP server.
So, to service the telegraf requests, we have multiple pypowerwall threads each sending a cloud API request to the Tesla servers - possibly the same API request - and since they are all sent at the same time, there has been no time for a response to be cached.
I have a change almost ready that essentially adds a mutex lock around the cloud API requests. This means only one thread would send the API request, while the other threads will wait, and when the lock is released those waiting threads will return cached data.
It's much friendlier on the Tesla servers, and would probably resolve the 429 error and may also mean the TTL changes are not required.
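The mutex approach can be illustrated with a minimal "single-flight" cache. This is a simplified sketch (no TTL expiry or lock timeout, unlike the real change), and `SingleFlightCache` is a made-up name: one thread performs the fetch per API name while concurrent threads block on the lock and then read the cached result.

```python
import threading
import time

class SingleFlightCache:
    def __init__(self):
        self.locks = {}   # one lock per API name
        self.cache = {}
        self.fetches = 0  # count of real upstream fetches

    def get(self, name, fetch):
        lock = self.locks.setdefault(name, threading.Lock())
        with lock:                  # only one thread fetches at a time
            if name in self.cache:  # waiters find fresh cached data here
                return self.cache[name]
            self.fetches += 1
            self.cache[name] = fetch()
            return self.cache[name]

sfc = SingleFlightCache()

def slow_fetch():
    time.sleep(0.1)  # simulate a slow Tesla cloud API call
    return {"site": "ok"}

threads = [threading.Thread(target=sfc.get, args=("SITE_CONFIG", slow_fetch))
           for _ in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sfc.fetches)  # prints 1 - one real fetch despite 5 concurrent requests
```

Without the lock, all five threads would race past the cache check and issue five identical cloud requests, which is exactly the telegraf-driven burst visible in the logs below.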
I have tested with the TTL changes and confirmed that this issue is still present. Below is some log output where you can see multiples of the same cloud API request being sent in the same second (e.g. look for "Fetching new data for SITE_CONFIG", which is being sent simultaneously in different threads).
```
12/27/2023 06:51:25 PM [proxy] [DEBUG] 172.18.0.4 "GET /freq HTTP/1.1" 200 -
12/27/2023 06:51:25 PM [pypowerwall.cloud] [DEBUG] -- cloud: Request for /vitals
12/27/2023 06:51:25 PM [pypowerwall.cloud] [DEBUG] Fetching new data for SITE_CONFIG
12/27/2023 06:51:25 PM [proxy] [DEBUG] 172.18.0.4 "GET /temps/pw HTTP/1.1" 200 -
12/27/2023 06:51:25 PM [pypowerwall.cloud] [DEBUG] -- cloud: Request for /vitals
12/27/2023 06:51:25 PM [pypowerwall.cloud] [DEBUG] Fetching new data for SITE_CONFIG
12/27/2023 06:51:25 PM [proxy] [DEBUG] 172.18.0.4 "GET /strings HTTP/1.1" 200 -
12/27/2023 06:51:25 PM [pypowerwall.cloud] [DEBUG] -- cloud: Request for /vitals
12/27/2023 06:51:25 PM [pypowerwall.cloud] [DEBUG] Fetching new data for SITE_CONFIG
12/27/2023 06:51:25 PM [proxy] [DEBUG] 172.18.0.4 "GET /soe HTTP/1.1" 200 -
12/27/2023 06:51:25 PM [pypowerwall.cloud] [DEBUG] -- cloud: Request for /api/system_status/soe
12/27/2023 06:51:25 PM [pypowerwall.cloud] [DEBUG] Fetching new data for SITE_SUMMARY
12/27/2023 06:51:25 PM [proxy] [DEBUG] 172.18.0.4 "GET /aggregates HTTP/1.1" 200 -
12/27/2023 06:51:25 PM [pypowerwall.cloud] [DEBUG] -- cloud: Request for /api/meters/aggregates
12/27/2023 06:51:25 PM [pypowerwall.cloud] [DEBUG] Fetching new data for SITE_DATA
12/27/2023 06:51:25 PM [proxy] [DEBUG] 172.18.0.4 "GET /pod HTTP/1.1" 200 -
12/27/2023 06:51:25 PM [pypowerwall.cloud] [DEBUG] -- cloud: Request for /vitals
12/27/2023 06:51:25 PM [pypowerwall.cloud] [DEBUG] Fetching new data for SITE_CONFIG
12/27/2023 06:51:25 PM [proxy] [DEBUG] 172.18.0.4 "GET /alerts/pw HTTP/1.1" 200 -
12/27/2023 06:51:25 PM [pypowerwall.cloud] [DEBUG] -- cloud: Request for /vitals
12/27/2023 06:51:25 PM [pypowerwall.cloud] [DEBUG] Fetching new data for SITE_CONFIG
12/27/2023 06:51:25 PM [pypowerwall.cloud] [DEBUG] Fetching new data for SITE_DATA
12/27/2023 06:51:25 PM [pypowerwall.cloud] [DEBUG] -- cloud: Request for /api/system_status/grid_status
12/27/2023 06:51:25 PM [pypowerwall.cloud] [DEBUG] Return cached response for SITE_DATA
12/27/2023 06:51:26 PM [pypowerwall.cloud] [DEBUG] Return cached response for SITE_DATA
12/27/2023 06:51:26 PM [pypowerwall.cloud] [DEBUG] Return cached response for SITE_DATA
12/27/2023 06:51:26 PM [pypowerwall.cloud] [DEBUG] -- cloud: Request for /api/operation
12/27/2023 06:51:26 PM [pypowerwall.cloud] [DEBUG] Return cached response for SITE_CONFIG
12/27/2023 06:51:26 PM [pypowerwall.cloud] [DEBUG] -- cloud: Request for /api/system_status
12/27/2023 06:51:26 PM [pypowerwall.cloud] [DEBUG] Return cached response for SITE_DATA
12/27/2023 06:51:26 PM [pypowerwall.cloud] [DEBUG] Return cached response for SITE_CONFIG
12/27/2023 06:51:26 PM [pypowerwall.cloud] [DEBUG] Return cached response for SITE_SUMMARY
12/27/2023 06:51:26 PM [pypowerwall.cloud] [DEBUG] Fetching new data for ENERGY_SITE_BACKUP_TIME_REMAINING
12/27/2023 06:51:26 PM [pypowerwall.cloud] [DEBUG] Return cached response for SITE_DATA
12/27/2023 06:51:26 PM [pypowerwall.cloud] [DEBUG] Return cached response for SITE_DATA
12/27/2023 06:51:26 PM [pypowerwall.cloud] [DEBUG] Return cached response for SITE_CONFIG
```
Here is the difference with the changes I'm working on... only 1 thread sends the SITE_CONFIG request (for example), while the other threads wait and then return cached data once the 1st thread gets the response. It happens within a fraction of a second.
```
12/27/2023 04:40:50 PM [proxy] [DEBUG] 172.18.0.4 "GET /alerts/pw HTTP/1.1" 200 -
12/27/2023 04:40:50 PM [pypowerwall.cloud] [DEBUG] -- cloud: Request for /vitals
12/27/2023 04:40:50 PM [pypowerwall.cloud] [DEBUG] Fetching new data for SITE_CONFIG
12/27/2023 04:40:50 PM [proxy] [DEBUG] 172.18.0.4 "GET /strings HTTP/1.1" 200 -
12/27/2023 04:40:50 PM [pypowerwall.cloud] [DEBUG] -- cloud: Request for /vitals
12/27/2023 04:40:50 PM [proxy] [DEBUG] 172.18.0.4 "GET /soe HTTP/1.1" 200 -
12/27/2023 04:40:50 PM [pypowerwall.cloud] [DEBUG] -- cloud: Request for /api/system_status/soe
12/27/2023 04:40:50 PM [pypowerwall.cloud] [DEBUG] Fetching new data for SITE_SUMMARY
12/27/2023 04:40:50 PM [proxy] [DEBUG] 172.18.0.4 "GET /aggregates HTTP/1.1" 200 -
12/27/2023 04:40:50 PM [pypowerwall.cloud] [DEBUG] -- cloud: Request for /api/meters/aggregates
12/27/2023 04:40:50 PM [pypowerwall.cloud] [DEBUG] Fetching new data for SITE_DATA
12/27/2023 04:40:50 PM [proxy] [DEBUG] 172.18.0.4 "GET /pod HTTP/1.1" 200 -
12/27/2023 04:40:50 PM [pypowerwall.cloud] [DEBUG] -- cloud: Request for /vitals
12/27/2023 04:40:50 PM [proxy] [DEBUG] 172.18.0.4 "GET /temps/pw HTTP/1.1" 200 -
12/27/2023 04:40:50 PM [pypowerwall.cloud] [DEBUG] -- cloud: Request for /vitals
12/27/2023 04:40:50 PM [proxy] [DEBUG] 172.18.0.4 "GET /freq HTTP/1.1" 200 -
12/27/2023 04:40:50 PM [pypowerwall.cloud] [DEBUG] -- cloud: Request for /vitals
12/27/2023 04:40:50 PM [pypowerwall.cloud] [DEBUG] Return cached response for SITE_CONFIG
12/27/2023 04:40:50 PM [pypowerwall.cloud] [DEBUG] Return cached response for SITE_CONFIG
12/27/2023 04:40:50 PM [pypowerwall.cloud] [DEBUG] Return cached response for SITE_CONFIG
12/27/2023 04:40:50 PM [pypowerwall.cloud] [DEBUG] Return cached response for SITE_CONFIG
12/27/2023 04:40:50 PM [pypowerwall.cloud] [DEBUG] Return cached response for SITE_CONFIG
12/27/2023 04:40:50 PM [pypowerwall.cloud] [DEBUG] Return cached response for SITE_DATA
12/27/2023 04:40:50 PM [pypowerwall.cloud] [DEBUG] -- cloud: Request for /api/operation
12/27/2023 04:40:50 PM [pypowerwall.cloud] [DEBUG] Return cached response for SITE_CONFIG
12/27/2023 04:40:50 PM [pypowerwall.cloud] [DEBUG] -- cloud: Request for /api/system_status
12/27/2023 04:40:50 PM [pypowerwall.cloud] [DEBUG] Return cached response for SITE_DATA
12/27/2023 04:40:50 PM [pypowerwall.cloud] [DEBUG] Return cached response for SITE_CONFIG
12/27/2023 04:40:50 PM [pypowerwall.cloud] [DEBUG] Return cached response for SITE_SUMMARY
12/27/2023 04:40:50 PM [pypowerwall.cloud] [DEBUG] Fetching new data for ENERGY_SITE_BACKUP_TIME_REMAINING
12/27/2023 04:40:50 PM [pypowerwall.cloud] [DEBUG] Return cached response for SITE_DATA
12/27/2023 04:40:50 PM [pypowerwall.cloud] [DEBUG] -- cloud: Request for /api/system_status/grid_status
12/27/2023 04:40:50 PM [pypowerwall.cloud] [DEBUG] Return cached response for SITE_DATA
12/27/2023 04:40:50 PM [pypowerwall.cloud] [DEBUG] Return cached response for SITE_DATA
12/27/2023 04:40:50 PM [pypowerwall.cloud] [DEBUG] Return cached response for SITE_DATA
12/27/2023 04:40:50 PM [pypowerwall.cloud] [DEBUG] Return cached response for SITE_DATA
```
Also, I was testing what happens when the Internet drops, and am tweaking retries/timeouts as the behavior was not great.
> I have a change almost ready that essentially adds a mutex lock around the cloud API requests.
This is great! ❤️ I do think the App is sending multiple concurrent requests as well, just nothing like the telegraf blast. The TTL hack did enough to eliminate the throttling from Tesla in my testing, but that doesn't mean it wasn't still extreme and an unkind level of requests (compared to the App). Getting this mutex solution in place should mitigate that and is a much cleaner solution than the TTL logic. Thanks!
I didn't have a lot of time to spend on it today. I did push a few small changes to our stats API and logging to help indicate when the proxy is in cloud mode and to provide additional information related to that for troubleshooting help. Also, testing using this new "cloud mode" code is working for multiple Dashboard instances I'm testing on different OSs. Submit your mutex changes and I'll roll it out to my test platforms too.
Also, I started updating Powerwall-Dashboard (mainly powerwall.yml to map the TeslaPy auth and site files) as needed for my testing. Added some bits to verify.sh as well. New Branch is v4.0.0: https://github.com/jasonacox/Powerwall-Dashboard/commit/a0ed67cade9ce5f791766844ba8dbeaf95dbe9c6
Hi Jason,
Some notes on my changes as they might need further explanation. Hopefully I have not made any breaking changes. My testing shows all working well, but please test and let me know if you find any issues!
**Limit cloud API requests to a single thread** - To stop multiple threads simultaneously sending the same Tesla cloud API requests, a mutex lock has been added so only a single thread sends the cloud API request while other threads wait. Once the response is received by one thread, the lock is released, and all waiting threads receive the cached data instead of sending the request to the cloud themselves.
**Removed the staggered TTL changes** - Except for SITE_CONFIG, which has been left at a 1 minute cache time.
**Reduced HTTP timeout and removed retries** - There is no point having the timeout set to anything greater than 5 seconds in general, because telegraf by default sends requests every 5 seconds and will not wait longer than that for a response. For the same reason, HTTP retries are not required. Retries have been removed and the default timeout reduced to 5 seconds (this can still be adjusted by the PW_TIMEOUT setting, however).
**Fixed exceptions and invalid return data** - These can occur when the connection to the Tesla cloud is lost (i.e. the Internet goes down). Some invalid data was still being returned by pypowerwall while the connection was down - it will now generally return None or empty data instead, so as not to contaminate Powerwall-Dashboard with invalid data.
**Changed elapsed time measurements to a monotonic clock** - For measuring elapsed time (i.e. for cache expiry or thread wait timeouts), use of time.time() (current system time) was changed to time.perf_counter(). time.time() can be affected by system clock changes (it could go backwards), whereas time.perf_counter() is recommended for elapsed time measurements as it is monotonic, only ever goes forward, and is not affected by system clock changes.
**Incremented version numbers** - Bumped version numbers in pypowerwall, the proxy server, and the cloud module.
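The monotonic-clock point above can be shown with a minimal elapsed-time pattern. The `expired` helper is illustrative (the real code inlines this check), but the `time.perf_counter()` usage is exactly the pattern described:

```python
import time

def expired(start, ttl):
    """True once `ttl` seconds have elapsed since `start`,
    where `start` is a time.perf_counter() reading.
    perf_counter() is monotonic, so this cannot go backwards
    if the system clock is adjusted (e.g. by NTP)."""
    return time.perf_counter() >= start + ttl

start = time.perf_counter()
assert not expired(start, 10)   # well within the TTL
time.sleep(0.05)
assert expired(start, 0.01)     # a short TTL has elapsed
```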
This is brilliant, @mcbirse !! I made a comment in your commit about abstracting the cache and mutex code in each of the get_*() functions into a central api call management function, but that is minor and could be handled later if it makes sense.
Give it a try: jasonacox/pypowerwall:0.7.1t34
Testing now...
✅ MacOS - local mode, cloud mode
✅ RPi - local mode, cloud mode
I agree, putting that replicated code into a central api call management function definitely makes sense. I was considering that but left the changes in each function for easier comparison for now.
I like keeping the separate get_*() functions named after their logical payload (the TeslaPy mapping isn't very intuitive IMHO). It also abstracts our API from TeslaPy changes.
I created a central function _site_api() and I'm running tests on it now. Please check me on this:
```python
def _site_api(self, name, ttl, **args):
    """
    Get site data from Tesla Cloud

        name - API name
        ttl - cache time to live in seconds
        args - Additional API arguments
    Returns (response, cached)
    """
    if self.tesla is None:
        return (None, False)
    # Check for lock and wait if api request already sent
    if name in self.apilock:
        locktime = time.perf_counter()
        while self.apilock[name]:
            time.sleep(0.2)
            if time.perf_counter() >= locktime + self.timeout:
                return (None, False)
    # Check to see if we have cached data
    if name in self.pwcache:
        if self.pwcachetime[name] > time.perf_counter() - ttl:
            return (self.pwcache[name], True)
    try:
        # Set lock
        self.apilock[name] = True
        response = self.site.api(name, **args)
    except Exception as err:
        log.error(f"ERROR: Failed to retrieve {name} - {repr(err)}")
        response = None
    else:
        self.pwcache[name] = response
        self.pwcachetime[name] = time.perf_counter()
    finally:
        # Release lock
        self.apilock[name] = False
    return (response, False)
```
```python
def get_battery(self):
    """
    Get site battery data from Tesla Cloud
    ...
    """
    # GET api/1/energy_sites/{site_id}/site_status
    (response, cached) = self._site_api("SITE_SUMMARY",
                                        self.pwcacheexpire, language="en")
    return response

def get_site_power(self):
    """
    Get site power data from Tesla Cloud
    ...
    """
    # GET api/1/energy_sites/{site_id}/live_status?counter={counter}&language=en
    (response, cached) = self._site_api("SITE_DATA",
                                        self.pwcacheexpire, counter=self.counter,
                                        language="en")
    if not cached:
        self.counter = (self.counter + 1) % COUNTER_MAX
    return response

def get_site_config(self):
    """
    Get site configuration data from Tesla Cloud
    ...
    """
    # GET api/1/energy_sites/{site_id}/site_info
    (response, cached) = self._site_api("SITE_CONFIG",
                                        SITE_CONFIG_TTL, language="en")
    return response

def get_time_remaining(self):
    """
    Get backup time remaining from Tesla Cloud
    {'response': {'time_remaining_hours': 7.909122698326978}}
    """
    # GET api/1/energy_sites/{site_id}/backup_time_remaining
    (response, cached) = self._site_api("ENERGY_SITE_BACKUP_TIME_REMAINING",
                                        self.pwcacheexpire, language="en")
    return response
```
At first glance looks good to me!
I like the idea that it returns whether the response was a cached response or not.
Also, I think it might be helpful to add debug log output showing when we actually return the response data, including whether that was a cached response or not - what do you think? An ERROR log will already be output on timeouts.
i.e. the DEBUG logs show when pypowerwall receives a request and the required cloud request, but not when we get the response (or return a cached response). I had added additional logging for this when testing, which was invaluable.
```
12/29/2023 05:50:40 PM [proxy] [DEBUG] 172.18.0.1 "GET /alerts/pw HTTP/1.1" 200 -
12/29/2023 05:50:40 PM [pypowerwall.cloud] [DEBUG] -- cloud: Request for /vitals
12/29/2023 05:50:40 PM [proxy] [DEBUG] 172.18.0.1 "GET /freq HTTP/1.1" 200 -
12/29/2023 05:50:40 PM [pypowerwall.cloud] [DEBUG] -- cloud: Request for /vitals
12/29/2023 05:50:40 PM [proxy] [DEBUG] 172.18.0.1 "GET /strings HTTP/1.1" 200 -
12/29/2023 05:50:40 PM [pypowerwall.cloud] [DEBUG] -- cloud: Request for /vitals
12/29/2023 05:50:40 PM [proxy] [DEBUG] 172.18.0.1 "GET /aggregates HTTP/1.1" 200 -
12/29/2023 05:50:40 PM [proxy] [DEBUG] 172.18.0.1 "GET /pod HTTP/1.1" 200 -
12/29/2023 05:50:40 PM [pypowerwall.cloud] [DEBUG] -- cloud: Request for /vitals
12/29/2023 05:50:40 PM [proxy] [DEBUG] 172.18.0.1 "GET /temps/pw HTTP/1.1" 200 -
12/29/2023 05:50:40 PM [pypowerwall.cloud] [DEBUG] -- cloud: Request for /vitals
12/29/2023 05:50:40 PM [proxy] [DEBUG] 172.18.0.1 "GET /soe HTTP/1.1" 200 -
12/29/2023 05:50:40 PM [pypowerwall.cloud] [DEBUG] -- cloud: Request for /api/system_status/soe
12/29/2023 05:50:40 PM [pypowerwall.cloud] [DEBUG] -- cloud: Request for /api/meters/aggregates
12/29/2023 05:50:40 PM [pypowerwall.cloud] [DEBUG] -- cloud: Request for /api/system_status/grid_status
12/29/2023 05:50:40 PM [pypowerwall.cloud] [DEBUG] -- cloud: Request for /api/operation
12/29/2023 05:50:40 PM [pypowerwall.cloud] [DEBUG] -- cloud: Request for /api/system_status
```
I pushed the _site_api() update with additional debug logging. Please feel free to adjust.
```
12/28/2023 11:58:10 PM [pypowerwall.cloud] [ERROR] ERROR: Failed to retrieve SITE_DATA - ConnectionError(ProtocolError('Connection aborted.', RemoteDisconnected('Remote end closed connection without response')))
```
I should remove the "ERROR" prefix in the log.error() calls. 😀 But otherwise, logging is working well.
This is looking good!
Successful tests using jasonacox/pypowerwall:0.7.1t35beta pre-release.
✅ RPi - cloud mode and local mode
✅ MacOS - cloud mode and local mode
✅ WinOS - cloud mode and local mode
✅ Ubuntu Linux - cloud mode and local mode
As a note, a helpful page for confirming the proxy is running: http://localhost:8675/help
I'm going to squash, merge to main and push pypowerwall v0.7.1 to PyPI for next round of non-beta testing.
@mcbirse great job! Feel free to continue to branch and submit updates if we need to v0.7.2.
v0.7.1 Released: https://pypi.org/project/pypowerwall/0.7.1/
This release provides two updates: