Open flyingOwl opened 7 months ago
Yes, yes, yes! Missed scrobbles bother me too. I was thinking about saving every scrobble to the database (exactly as you mentioned), then sending them in the background: deleting successfully sent scrobbles from the table and marking failed ones with as precise a reason as possible (i.e. at least a backtrace). On top of that, a screen that lists the failed scrobbles with the ability to retry them or clean them up (e.g. when you understand the problem and have manually scrobbled somewhere upstream). Also, in offline mode the app doesn't store "now playing" scrobbles (those without submission=true); I use them in my recommendation system to detect tracks I skip.
There is also an edge case: if you queued a song while in offline mode and then went back online, the still-playing song (and the whole queue) keeps its offline-mode identifiers (paths to the cached songs in the filesystem), so the app attempts to scrobble the song with an identifier the server doesn't know and naturally fails.
Anyway, I think the approach with a dedicated scrobble store is viable to cover both your use cases and mine.
> They are stored in SharedPreferences either with an ID or the search criteria. It was not really made to do more than it does now.
Hopefully, I fixed the "use a Lucene clause to get the song id once back online" approach in https://github.com/paroj/DSub2000/commit/c36328c1c91b077b93e32b46101d42121fde55dd
BTW, I started implementing the scrobbles-table approach (https://github.com/paroj/DSub2000/commit/2e774aae620f629799831936b9cd78814d39c060), but decided to postpone it until we get this into F-Droid. Anyway, I would like to hear what you think about all of this and what paroj thinks about the subject.
Using a dedicated table for this kind of caching sounds good then!
I would also include "star" and "unstar" in this table, as they are already implemented in the same way. And maybe also "rating" and "now-playing scrobble". So a table could look like:

| server_id | song_id | action | timestamp | data |
|---|---|---|---|---|
| server key/id/... | song identifier | "scrobble", "rating", "star", ... | timestamp, probably only relevant for scrobbling | rating: 1-5; star: true/false; scrobble: submission=true/false; ... maybe as JSON |
Any request made during offline mode should be added there, and so should every failed request in online mode. Maybe the next successful scrobble (or rating, ...) should also trigger a sync of the table with the server. Another column "error" might be useful to distinguish between connection failures, offline mode, and server errors (e.g. a song could not be found although the app sent everything correctly). This way, only updates that failed due to a connection error are automatically resent.
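The resend policy could be sketched roughly like this. This is purely illustrative: the class, enum, and column names are assumptions, not anything from the actual codebase.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the "error" column and the resend policy: only entries that
// failed due to a connection error are picked for an automatic resend;
// offline-mode and server-error entries wait for a manual retry in the UI.
// All names here are illustrative, not from the actual DSub2000 code.
public class PendingSyncQueue {
    public enum Error { CONNECTION, OFFLINE_MODE, SERVER_ERROR }

    public static class Entry {
        public final String songId;
        public final String action; // "scrobble", "rating", "star", ...
        public final Error error;

        public Entry(String songId, String action, Error error) {
            this.songId = songId;
            this.action = action;
            this.error = error;
        }
    }

    // Called after the next successful request: select what to resend.
    public static List<Entry> autoRetryable(List<Entry> pending) {
        List<Entry> retry = new ArrayList<>();
        for (Entry e : pending) {
            if (e.error == Error.CONNECTION) {
                retry.add(e);
            }
        }
        return retry;
    }
}
```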
This table could be displayed in a new activity, "Pending metadata sync" (or whatever): just a list with the song name and action, an info button that shows the reason, and a button to retry the synchronization.
Maybe it is then worth generalizing this to, conceptually, a table of "failed web queries we really want to deliver", with the following structure:

| server_id | endpoint | timestamp | query_body |
|---|---|---|---|
| server key | setRating/createBookmark/scrobble | timestamp, to display in the UI only | as JSON |
Additionally, we could ditch the server_id column and save the full URL as the endpoint. We need to keep support for an alternative server URL in mind, though.
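One way the generalized row could look in code, as a rough sketch. The class name, field names, and the query-string layout are assumptions for illustration, not actual DSub2000 code.

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;
import java.util.Map;

// Hypothetical model of one row in the generalized table: the endpoint
// name plus its parameters, so any failed Subsonic call can be replayed.
public class PendingRequest {
    public final String serverId;            // server key
    public final String endpoint;            // e.g. "setRating", "scrobble"
    public final long timestamp;             // for display in the UI only
    public final Map<String, String> params; // query_body, persisted e.g. as JSON

    public PendingRequest(String serverId, String endpoint, long timestamp,
                          Map<String, String> params) {
        this.serverId = serverId;
        this.endpoint = endpoint;
        this.timestamp = timestamp;
        this.params = params;
    }

    // Rebuild the Subsonic-style query for a retry (path layout assumed).
    public String toQueryString() {
        StringBuilder sb = new StringBuilder("rest/").append(endpoint).append(".view?");
        boolean first = true;
        for (Map.Entry<String, String> e : params.entrySet()) {
            if (!first) sb.append('&');
            sb.append(URLEncoder.encode(e.getKey(), StandardCharsets.UTF_8))
              .append('=')
              .append(URLEncoder.encode(e.getValue(), StandardCharsets.UTF_8));
            first = false;
        }
        return sb.toString();
    }
}
```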
A more generalized table with endpoint and body data is a good idea! Though I would add a column stating why the entry is in this table (e.g. connection error, offline mode, ...).
By adding the full endpoint URL to the table, the instance that generates the entry must translate the action to a URL. That knowledge currently lives in the "RESTMusicService", and I would argue that only this class should be able to translate an action to a URL. But this would mean the "OfflineMusicService" cannot generate entries for this table.
Following this approach, we probably need to rework the MusicService hierarchy so that the RESTMusicService is the main service, and the OfflineMusicService and the CachedMusicService are refactored into a caching mechanism. Then only one MusicService is left, which handles caching, offline mode, and connection errors / retries on its own.
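The rework described above could be sketched as a decorator around a single service interface. Everything here is a sketch under that assumption: the class names only mirror the ones mentioned in the thread, and the stub does no real HTTP.

```java
import java.util.ArrayList;
import java.util.List;

// One MusicService interface; offline/failure handling is layered around
// the REST implementation instead of living in sibling service classes.
// All names are illustrative, not the actual DSub2000 classes.
public class ServiceRework {
    public interface MusicService {
        void scrobble(String songId, boolean submission) throws Exception;
    }

    // Stand-in for RESTMusicService; the real one would do the HTTP call.
    public static class RestMusicService implements MusicService {
        public boolean online = true;

        public void scrobble(String songId, boolean submission) throws Exception {
            if (!online) throw new Exception("connection error");
        }
    }

    // Decorator: on failure, persist the action for a later resync.
    public static class SyncingMusicService implements MusicService {
        private final MusicService delegate;
        public final List<String> pending = new ArrayList<>(); // would be the DB table

        public SyncingMusicService(MusicService delegate) {
            this.delegate = delegate;
        }

        public void scrobble(String songId, boolean submission) {
            try {
                delegate.scrobble(songId, submission);
            } catch (Exception e) {
                pending.add(songId); // would go to the pending-sync table
            }
        }
    }
}
```

The appeal of this shape is that callers only ever see one MusicService, while the caching/retry behaviour stays in a single place.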
I took a good look at the code, and we probably can't store the full URL in the table because it would contain the user's credentials. But the spot right before they are added to the request seems like a good place to start this feature:
I'll see how far I get with the ideas in my head. Unfortunately, this requires quite a bit of restructuring, but it keeps the current implementation mostly in place. I don't have a fully elaborated plan yet, and there are several thousand lines of code in these files, but hopefully I will figure it out.
I connect to my navidrome server and most of the time I want to play a smart playlist generated on the server. This playlist highly depends on the "last played" field. So, scrobbling is an essential part of playing the songs. Each song that is not scrobbled will mess up the playlist and make me unhappy.
With the current behavior of the app, I can achieve full scrobbling in two ways:
This has some problems:
I did some experiments on my fork but I was not happy with the complexity it would add. My idea was to use the OfflineMusicService as a fallback when the RESTMusicService failed to sync a scrobble (and star/unstar/rate/...) call. But the scrobble caching gave me a headache. They are stored in SharedPreferences either with an ID or the search criteria. It was not really made to do more than it does now.
In my opinion, the SharedPreferences are not suitable for what I want. I would suggest a simple database. But before I get my hands dirty, I would love to hear some other thoughts, wishes, or maybe some ideas or solutions.