1icengineer opened this issue 2 months ago
After setting the credentials back to the Fleet API and authorizing via the Tesla app, I get these results.
The last trip info is still not updated and stuck at July 7, but the vehicle info at least shows the current odometer:
Logfile errors:
I have the same issue here. Items like the car status and battery SOC are updating, but in the Timeline the last trip I see is from 9 July; it has been stuck there for 9 days now. I updated to the latest version and restarted the Docker containers, but no change.
Same problem for me. Data has not been updated since Jul 8th; I'm running 1.58.8.0. The main page says "Update available: 1.59.1.0", but the auto-update has not started yet.
Tesla changed the API. Please make sure you are using at least 1.59.0. If the update doesn't work for any reason, please try this:
https://github.com/bassmaster187/TeslaLogger/blob/master/docs/en/faq.md#update-doesnt-work-anymore
I thought I was running 1.59.1.0, at least that is what the admin page indicated, but I forced a `docker compose build` to make sure the image was updated, because just doing a `docker compose pull` does not update the images.
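For anyone hitting the same thing, a rebuild sequence along these lines should force the image to be refreshed (a sketch only; the directory and service layout depend on your own compose setup):

```
cd /path/to/TeslaLogger        # wherever your docker-compose.yml lives (path is an assumption)
docker compose pull            # only refreshes images that come from a registry
docker compose build --pull    # forces a rebuild of the locally built image
docker compose up -d           # recreates the containers from the new image
```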
After that, I went for a quick drive just to check, saw the icon in the display indicating that my data was being downloaded via the app, and now my Timeline is updating again.
The update does not work; there is a failure during cloning. The object count does not match: it should be 26608, but only 2943 are retrieved. Here's the update.sh output:

root@raspberry:/etc/teslalogger# ./update.sh
git version 2.20.1
Cloning into 'TeslaLogger'...
remote: Enumerating objects: 26608, done.
remote: Counting objects: 100% (2943/2943), done.
remote: Compressing objects: 100% (1001/1001), done.
error: RPC failed; curl 92 HTTP/2 stream 0 was not closed cleanly: CANCEL (err 8)
fatal: the remote end hung up unexpectedly
fatal: early EOF
fatal: index-pack failed
cp: cannot stat 'TeslaLogger/TeslaLogger/bin/': No such file or directory
cp: cannot stat 'TeslaLogger/TeslaLogger/www/': No such file or directory
error: RPC failed; curl 92 HTTP/2 stream 0 was not closed cleanly: CANCEL (err 8)
fatal: the remote end hung up unexpectedly
That is a problem with your connection to GitHub.
I suggest you keep repeating
git fetch
git reset --hard origin/master
git checkout origin/master
until you have pulled the full update.
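If the clone itself keeps dying over a shaky connection, a shallow clone that is deepened afterwards can also work (an alternative workaround, not from the FAQ; each step transfers far fewer objects):

```
git clone --depth 1 https://github.com/bassmaster187/TeslaLogger.git
cd TeslaLogger
git fetch --unshallow   # repeat on failure until the full history has arrived
```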
Thanks! As the Latins used to say, "repetita iuvant" (repetition helps).
Seems the update to 1.59.1.0 is breaking it some more:

sudo git reset --hard origin/master
HEAD is now at e0596df4 Table doesn't exist exception
That is the actual commit message and perfectly normal.
"table doesn't exist exception" is the not-so-normal thing about it.... got some more hints and corrected the issue here:
sudo git checkout master
Already on 'master'
Your branch is behind 'origin/master' by 20 commits, and can be fast-forwarded.
  (use "git pull" to update your local branch)
~/TeslaLogger$ sudo git pull
Updating 2ba9e91f..e0596df4
Fast-forward
 TeslaLogger/Car.cs                     |  19 ++--
 TeslaLogger/DBHelper.cs                |  21 +++--
 TeslaLogger/NearbySuCService.cs        | 362 ++++++++++++++++++------------------------------------
 TeslaLogger/Properties/AssemblyInfo.cs |   4 +-
 TeslaLogger/TelemetryConnection.cs     | 122 +++++++++++++++++-------
 TeslaLogger/TeslaAPIState.cs           |  27 ++----
 TeslaLogger/WebHelper.cs               | 121 ++++++++++++------------
 TeslaLogger/WebServer.cs               |   1 +
 TeslaLogger/bin/TeslaLogger.exe        | Bin 939520 -> 935936 bytes
 TeslaLogger/bin/changelog.md           |   5 +-
 TeslaLogger/bin/geofence.csv           |   9 +-
 TeslaLogger/www/admin/index.php        |   2 +-
 UnitTestsTeslalogger/UnitTestBase.cs   |  13 +++
 13 files changed, 302 insertions(+), 404 deletions(-)
Now on 1.59.1.0, after giving it new credentials again, I just noticed an interesting note in the log:
18.07.2024 16:28:15 : Table: alert_names data: 0 MB index: 0 MB rows: 2
18.07.2024 16:28:15 : Table: superchargerstate data: 3 MB index: 0 MB rows: 58437
18.07.2024 16:28:15 : Housekeeping: database.mothership older than 14 days count: 783 minID:2375001 maxID:2375783
18.07.2024 16:28:15 : UpdateCO2
18.07.2024 16:28:17 : UpdateCO2 finish
18.07.2024 16:28:17 : RunHousekeepingInBackground finished, took 1158.29ms
18.07.2024 16:28:35 : #1[Car_1:19]: GetVehicles Error: A task was canceled.
18.07.2024 16:29:16 : #1[Car_1:19]: GetVehicles Error: A task was canceled.
18.07.2024 16:29:57 : #1[Car_1:19]: GetVehicles Error: A task was canceled.
When re-importing the last backups from yesterday with the newly built Docker image, I notice these entries in the log, which seem to indicate that some data from the past 2 weeks gets deleted. Why is this happening?

19.07.2024 09:07:42 : Table: superchargerstate data: 3 MB index: 0 MB rows: 60947
19.07.2024 09:07:42 : Housekeeping: database.mothership older than 14 days count: 3358 minID:2375001 maxID:2378358
19.07.2024 09:07:42 : Housekeeping: delete database.mothership chunk 2376001
19.07.2024 09:07:43 : Housekeeping: delete database.mothership chunk 2377001
19.07.2024 09:07:44 : Housekeeping: delete database.mothership chunk 2378001
19.07.2024 09:07:45 : Housekeeping: OPTIMIZE TABLE mothership
The mothership table is just for debugging purposes. You can see how fast the responses from the Tesla servers are and how many problems we had, so the housekeeping above does not touch your trip data.
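If you want to double-check that only this debug table is being trimmed, something like the following should work (a sketch; the service name `database`, the database name `teslalogger`, and the credentials are assumptions from a typical Docker setup, so adjust to your docker-compose.yml):

```
# Count the remaining debug rows in the mothership table (names are assumptions).
docker compose exec database \
  mysql -u root -p teslalogger -e "SELECT COUNT(*) FROM mothership;"
```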
Thanks for your input and for all the work you are doing to follow up on and correct the things getting broken on Tesla's side!!
Right now, I have the car reconfigured again with non-Fleet-API credentials, and I can see the location on the admin screen flip-flopping between two recent locations from drives earlier today.
Updated to 1.59.2.0 and still no update of the last trip and timeline in Grafana since July 09; maybe my database is broken.
Only non-Fleet-API access is working here; with the Fleet API there is no update whatsoever, and there are error messages in the log:

What's also different from before: the status is showing long periods of being offline, vs. before:
So this morning I just "updated" again from 1.59.2.0 to 1.59.2.0, i.e. the same release before and after, but after that TeslaLogger started recording data from my drives again, after a break of ~1200 km.
Whatever that was, now it seems fixed:
However, consumption information is still missing and not recorded:
Works for me with 1.59.1. Thank you all for the bug-fixing effort!!!!
Back to normal now, issue seems fixed:
Append Logfile
You find it here: \\raspberry\teslalogger\nohup.out
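If it helps, on a Docker installation the same log can usually be followed live with something like this (the service name `teslalogger` is an assumption from a default compose setup):

```
docker compose logs -f teslalogger   # Docker installation (service name assumed)
tail -f /etc/teslalogger/nohup.out   # classic installation, path as seen in the update.sh output above
```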
Describe the bug
For 2 weeks, trips only show in Grafana under "Visited", if at all. No update elsewhere in the "Trips" display and such since July 9. Battery data is showing 57% / 95 km range even though the car was just supercharged to full and now has 89% / 362 km. When checking TeslaLogger while driving, I can see it updating the position/map, vehicle state, etc.
To Reproduce
Happened after the recent updates to 1.59.0.0, around the time when the upgrading/downgrading between 1.59 and 1.58 happened.
Expected behavior
Expected to see accurate recording of all trips, just as before.
Screenshots
Log snippet with Fleet API credentials seems okay:
Further notes
Grafana "Visited" has been working again for 2 days, but trip history / charging history haven't updated since July 09. Consumption data from today, July 17, is showing incomplete.
Teslalogger Type
TeslaLogger in Docker on a Google VM on Ubuntu 20.04.6 LTS

Client: Docker Engine - Community
 Version:    27.0.3
 Context:    default
 Debug Mode: false
 Plugins:
  buildx: Docker Buildx (Docker Inc.)
   Version: v0.15.1
   Path:    /usr/libexec/docker/cli-plugins/docker-buildx
  compose: Docker Compose (Docker Inc.)
   Version: v2.28.1
   Path:    /usr/libexec/docker/cli-plugins/docker-compose

logfile.zip
Did you update the image with apt-get update && apt-get upgrade?
Do you use Tasker or iBeacon
No
Additional context
Did a reset of the credentials to the Fleet API and without the Fleet API; no effect with regard to this issue.