vrtmrz / obsidian-livesync

MIT License

Syncing often seems to be stuck at read storage processes #372

Closed schemen closed 4 months ago

schemen commented 5 months ago


Abstract

The sync stops working after some events. When that happens, I mostly see these icons: ⏳ Working read storage processes 🛫 Pending read storage processes

I'm not certain yet what the cause is. It has been happening since 0.22. I have to restart Obsidian on both devices and eventually it works, but this seems to be a new kind of stuck behaviour whose cause I can't pin down.

Expected behaviour

Actually happened

On the device where the change should be pulled, the change seems to be fully ignored. Clicking replicate doesn't do anything. Restarting both clients (several times) eventually restores consistency.

Reproducing procedure

  1. Configure OnEvent syncs.
  2. Change any content or add any file.
  3. Click the replication button on the ribbon.
  4. ⏳ or 🛫 appears.
  5. No sync happens, though. Several Obsidian restarts later -> eventual consistency.

Please let me know if you would like anything specifically tested.

schemen commented 5 months ago

It might actually be related to #371.

vrtmrz commented 5 months ago

Thank you for reporting this issue! I had noticed this, but thought it could not be that serious a problem and put it on hold. However, I was wrong.

🛫 and ⏳ indicate that detected storage changes have been queued and are being read. This should not happen after writing to the storage. Normally, we can ignore such an event if the content is the same; however, that is not possible if there are conflicts in the notes. There was also a problem with the deletion handling. These problems might have combined to cause the conflicts.
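For readers following along, the event-ignoring idea described above can be sketched roughly like this. This is a minimal illustration with entirely hypothetical names (none of them come from obsidian-livesync's actual code): after the plugin writes replicated content to storage, the resulting storage-change event is a self-echo and can be dropped when the content still matches what was written.

```typescript
// Hypothetical sketch of echo suppression after a DB -> storage write.
// All names here are illustrative assumptions, not the plugin's API.

function hashContent(content: string): number {
  // Tiny non-cryptographic hash; good enough for an illustration.
  let h = 0;
  for (let i = 0; i < content.length; i++) {
    h = (h * 31 + content.charCodeAt(i)) | 0;
  }
  return h;
}

class EchoSuppressor {
  private lastWritten = new Map<string, number>();

  // Called after the plugin itself writes replicated content to storage.
  recordWrite(path: string, content: string): void {
    this.lastWritten.set(path, hashContent(content));
  }

  // Called when a storage-change event fires; true means "ignore it".
  shouldIgnore(path: string, content: string): boolean {
    const expected = this.lastWritten.get(path);
    if (expected !== undefined && expected === hashContent(content)) {
      this.lastWritten.delete(path);
      return true; // self-echo: content unchanged, skip re-reading
    }
    return false; // genuine change (or a conflict): process it
  }
}

const s = new EchoSuppressor();
s.recordWrite("notes/Test sync.md", "hello");
console.log(s.shouldIgnore("notes/Test sync.md", "hello")); // true: echo
console.log(s.shouldIgnore("notes/Test sync.md", "hello")); // false: new event
```

The tricky part the comment above alludes to: when the storage content does NOT match (e.g. a conflicting local edit happened between the write and the event), the event cannot be ignored, and the queued read must run to completion or the ⏳/🛫 counters stay stuck.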

Actually, I have not reproduced the conflict problem yet, but v0.22.3 may address it. May I ask you to check v0.22.3, please?

schemen commented 5 months ago

Hey thanks for the quick feedback!

I have updated both clients to the new version. It still seems to be somewhat stuck, though. Here are the last logs:

1/29/2024, 12:42:47 PM->Looking for the point last synchronized point.
1/29/2024, 12:42:47 PM->Replication activated
1/29/2024, 12:42:47 PM->↑0 (31) ↓10 (LIVE)
1/29/2024, 12:42:47 PM->Replication completed
1/29/2024, 12:42:47 PM->STORAGE <- DB (modify,plain) 02 Private Notes/General/Test sync.md
1/29/2024, 12:42:47 PM->Processing 02 Private Notes/General/Test sync.md (02 Priva:10-4f) 
1/29/2024, 12:42:49 PM->Processing configurations done
1/29/2024, 12:42:49 PM->All files enumerated
1/29/2024, 12:43:01 PM->Content saved:02 Private Notes/General/Test sync.md ,chunks: 3 (new:2, skip:2, cache:1)
1/29/2024, 12:43:01 PM->STORAGE -> DB (plain) 02 Private Notes/General/Test sync.md

The last line here seems to be where it gets stuck; the ⏳ symbol remains. This time, a single restart of Obsidian resolves it, on the side that made the change. The side that is pulling doesn't have the issue.

Maybe related:

Since the last update there also seems to be a large number of odd log entries, apparently comparing Unix timestamps. They seem to cover almost the entire vault.

1/29/2024, 12:45:51 PM->STORAGE <- DB :ZZ Archive/XXX1.md
1/29/2024, 12:45:51 PM->1699829328 < 1706475568
1/29/2024, 12:45:51 PM->STORAGE <- DB :ZZ Archive/XXX2.md
1/29/2024, 12:45:51 PM->1699829328 < 1706475568
1/29/2024, 12:45:51 PM->STORAGE <- DB :ZZ Archive/XXX3.md
1/29/2024, 12:45:51 PM->1699829328 < 1706475568
1/29/2024, 12:45:51 PM->STORAGE <- DB :ZZ Archive/XXX4.md
1/29/2024, 12:45:51 PM->1699829328 < 1706475568
1/29/2024, 12:45:51 PM->STORAGE <- DB :ZZ Archive/XXX5.md
1/29/2024, 12:45:51 PM->1699829328 < 1706475568
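For what it's worth, the paired numbers do look like Unix timestamps in seconds (an assumption on my part; the plugin's internals aren't shown in the log). Decoding them supports that reading: the smaller value falls in November 2023 and the larger one on 28 January 2024, close to when these logs were taken, which would be consistent with a file-modification-time comparison.

```typescript
// Decode the values from the log as Unix epoch seconds.
// This is just a sanity check, not plugin code.
function decodeEpochSeconds(s: number): string {
  return new Date(s * 1000).toISOString();
}

console.log(decodeEpochSeconds(1699829328)); // 2023-11-12T22:48:48.000Z
console.log(decodeEpochSeconds(1706475568)); // 2024-01-28T20:59:28.000Z
```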

Thank you for your hard work!

schemen commented 5 months ago

It looks like 0.22.4 has stabilized the situation so far!

I'll continue to observe for a while, but otherwise we may be able to close this soon. Thank you!

vrtmrz commented 5 months ago

It is a great relief to hear that! I would love to thank you once again for your patience and cooperation!

schemen commented 5 months ago

Ok, after a few hours and 4 clients of real-world usage, it seems to be fixed :) Thank you! I'll close the issue.