jasonacox / Powerwall-Dashboard

Grafana Monitoring Dashboard for Tesla Solar and Powerwall Systems
MIT License

Connection full with 0.10.x proxy #484

Closed jgleigh closed 2 weeks ago

jgleigh commented 3 weeks ago

@jasonacox Got 45 connection warnings within 2 seconds. Wondering if some decision logic changed in the latest release.

2024-06-10 15:27:58 06/10/2024 03:27:58 PM [proxy] [INFO] pyPowerwall [0.10.2] Proxy Server [t59] - HTTP Port 8675

5x 2024-06-10 18:34:52 06/10/2024 06:34:52 PM [pypowerwall] [ERROR] Failed to get /api/system_status/grid_status

45x 2024-06-10 18:35:01 06/10/2024 06:35:01 PM [urllib3.connectionpool] [WARNING] Connection pool is full, discarding connection: 192.168.91.1. Connection pool size: 15

jasonacox commented 3 weeks ago

Thanks @jgleigh - It is running the same logic unless you switched into "full" (e.g. Powerwall 3) mode. Go to http://localhost:8675/help and look for tedapi_mode (full or hybrid).
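
If it's easier to check from a script, something like this would work (purely illustrative, assuming the proxy is reachable on localhost:8675 and /help returns a text/HTML page):

```python
# Hypothetical helper: fetch the proxy's /help page and print any line that
# mentions tedapi_mode. Adjust the host/port for your setup.
import requests

resp = requests.get("http://localhost:8675/help", timeout=5)
resp.raise_for_status()
for line in resp.text.splitlines():
    if "tedapi_mode" in line:
        print(line.strip())
```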

jgleigh commented 3 weeks ago

hybrid

jasonacox commented 3 weeks ago

Thanks for checking! You are on the same codebase. The ERROR means it got a null response from the Powerwall for that call (possibly a timeout). The WARNING messages mean that multiple connections were queued up waiting for the Powerwall, which pushed past the 15-connection limit, so it began pruning them.

Do your graphs look like you are seeing a lot of gaps?
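
For illustration, here's a small standalone demo of where that urllib3 warning comes from (this is not the proxy's code): a shared requests.Session keeps at most pool_maxsize connections per host, and any extra connections returned by concurrent requests get discarded with "Connection pool is full, discarding connection".

```python
# Standalone demo of the urllib3 pool-full warning. The proxy's limit is 15;
# this uses pool_maxsize=2 so the warning triggers quickly with 8 workers.
import concurrent.futures
import logging

import requests
from requests.adapters import HTTPAdapter

logging.basicConfig(level=logging.WARNING)

session = requests.Session()
session.mount("https://", HTTPAdapter(pool_connections=1, pool_maxsize=2))

def fetch(_):
    # Each in-flight request needs its own connection; only 2 can be kept.
    return session.get("https://example.com", timeout=5).status_code

with concurrent.futures.ThreadPoolExecutor(max_workers=8) as pool:
    print(list(pool.map(fetch, range(8))))
```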

jgleigh commented 3 weeks ago

No gaps. I was just wondering why it kept sending so many requests in such a short window when the connection pool was already full. Over the last 24 hours it hit that connection pool limit several times, with a varying number of repeated warnings each time (2x, 7x, 20x, 14x, 12x). It also seems odd that the number of requests when the pool is full isn't consistent.

Running 0.10.4 now.

2024-06-11 05:53:14 06/11/2024 05:53:14 AM [proxy] [INFO] pyPowerwall [0.10.4] Proxy Server [t61] - HTTP Port 8675

jasonacox commented 3 weeks ago

Understood. I have an idea. Did you see these before? If not, the difference between the non-TEDAPI version and this one is that this one opens additional connections to the /tedapi API endpoints, which likely drives up the number of concurrent requests. You could adjust the connection pool size in pypowerwall.env to allow more concurrent requests:

PW_POOL_MAXSIZE=20

And restart with ./compose-dash.sh up -d
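
For reference, a sketch of how a pool-size setting like this typically maps onto the underlying requests/urllib3 session (illustrative only; the env var name and the default of 15 come from this thread, but the wiring below is a generic pattern, not a copy of pypowerwall's source):

```python
# Illustrative only: read a PW_POOL_MAXSIZE-style env var and apply it to a
# shared requests.Session so more connections can be kept per host.
import os

import requests
from requests.adapters import HTTPAdapter

pool_maxsize = int(os.getenv("PW_POOL_MAXSIZE", "15"))

session = requests.Session()
adapter = HTTPAdapter(pool_connections=1, pool_maxsize=pool_maxsize)
session.mount("https://", adapter)  # Powerwall gateway endpoints are HTTPS
session.mount("http://", adapter)
```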

I'm seeing the same on my end, so I'm going to increase the connection pool size to see if I can eliminate it.

jgleigh commented 3 weeks ago

Saw it occasionally before, but not consistently like this. Trying 20 as well and we'll see how it goes.

jasonacox commented 3 weeks ago

As a second option...

In my testing in "full" mode (to simulate PW3), where it only uses the /tedapi API, I started to see incomplete or missing payloads resulting in None responses. There is enough error handling that it doesn't affect the graphs, but those failures are logged. I'm testing call locking to limit the load caused by concurrent polling. FYI if you want to test:

jasonacox/pypowerwall:0.10.5t61-beta8
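
For anyone curious what "call locking" means here, a rough sketch of the idea (my illustration, not the beta's actual code): serialize polls against the gateway with a lock, and let concurrent callers reuse a recent result instead of stacking requests.

```python
# Rough illustration of call locking plus a short-lived cache. Names and the
# TTL are hypothetical; this is not the pypowerwall 0.10.5 implementation.
import threading
import time

_lock = threading.Lock()
_cache = {"ts": 0.0, "data": None}
CACHE_TTL = 5  # seconds; hypothetical value

def poll_tedapi(fetch_fn):
    """Return cached data if fresh, otherwise fetch under the lock."""
    if _cache["data"] is not None and time.time() - _cache["ts"] < CACHE_TTL:
        return _cache["data"]
    with _lock:
        # Re-check inside the lock: another thread may have just refreshed it.
        if _cache["data"] is None or time.time() - _cache["ts"] >= CACHE_TTL:
            _cache["data"] = fetch_fn()  # the actual gateway call
            _cache["ts"] = time.time()
        return _cache["data"]
```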

jgleigh commented 3 weeks ago

so far so good 🤞

jgleigh commented 2 weeks ago

48 hours and only one error. This is about 1000% better than before!

2024-06-14 06:32:05 06/14/2024 06:32:05 AM [pypowerwall] [ERROR] Failed to get /api/system_status/grid_status

jasonacox commented 2 weeks ago

That's awesome @jgleigh! I get the same error on occasion. It means the payload didn't include the ISLAND data or the specific keys, and I'm not sure why that happens. I'm going to keep poking at it, but for now I think the latest pypowerwall 0.10.5 upgrade (https://github.com/jasonacox/pypowerwall/pull/102) is solid.
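
For context on why that error is harmless to the graphs, a hypothetical guard along these lines would explain the behavior (key names are placeholders, not the real payload schema):

```python
# Hypothetical guard: if the ISLAND section or its keys are missing from the
# payload, the lookup returns None, which surfaces as the logged ERROR while
# the dashboard simply tolerates the missing sample.
def grid_status(payload):
    island = (payload or {}).get("ISLAND")  # section can be absent from the payload
    if not island:
        return None  # would surface as the logged ERROR
    return island.get("GridConnected")  # placeholder key name
```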