boramalper / magnetico

Autonomous (self-hosted) BitTorrent DHT search engine suite.
http://labs.boramalper.org/magnetico/
GNU Affero General Public License v3.0
3.06k stars · 344 forks

sqlite3.IntegrityError #133

Closed · zombiesonore closed this issue 6 years ago

zombiesonore commented 7 years ago

Very nice project! Just to let you know: after running for more than 48 hours:

2017-07-25 11:44:36,971  INFO      Added: `###`
2017-07-25 11:44:36,972  ERROR     Could NOT commit metadata to the database! (8762 metadata are pending)
Traceback (most recent call last):
  File "/home/###/.local/lib/python3.5/site-packages/magneticod/persistence.py", line 116, in __commit_metadata
    self.__pending_metadata
sqlite3.IntegrityError: UNIQUE constraint failed: torrents.info_hash

Plenty of space on my hard drive. Running on Debian 9 (stable, up to date). Installed last week with pip3. Bye!

jamesaepp commented 7 years ago

(attached: console.log)

Greetings. I am also suffering from this bug. My daemon ran for about 5 hours until it hit this error at epoch 1508661058. I'm no programmer, but I assume the code that commits to the database needs to handle duplicate hashes a little better. I understand that different nodes can carry different display names for the same hash. I have no source for that, just anecdotal evidence; I have seen it before, and it depends on which peers you obtain from the DHT.
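The failure in the log can be reproduced in isolation. This sketch uses a hypothetical minimal schema with the column names taken from the traceback, not magneticod's actual table definition: inserting the same infohash twice, even with different display names, trips the `UNIQUE` constraint exactly as reported.

```python
import sqlite3

# Hypothetical minimal schema mirroring the torrents table
# (column names inferred from the traceback, not the real source).
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE torrents (
        info_hash BLOB NOT NULL UNIQUE,
        name      TEXT NOT NULL
    )
""")

info_hash = bytes.fromhex("aa" * 20)  # a fake 20-byte infohash
conn.execute("INSERT INTO torrents VALUES (?, ?)", (info_hash, "Some.Torrent"))

# The same infohash arriving again -- even with a different display
# name -- violates the UNIQUE constraint, as in the log above.
try:
    conn.execute("INSERT INTO torrents VALUES (?, ?)", (info_hash, "Other.Name"))
except sqlite3.IntegrityError as e:
    print(e)  # UNIQUE constraint failed: torrents.info_hash
```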

MikeLund commented 6 years ago

@boramalper Is there a way around this, or are you focusing more on the rewrite right now? I saw the related issue https://github.com/boramalper/magnetico/issues/97, which was closed. I just started trying out this project and ran into this after a few hours as well. Cheers.

pieterjanheyse commented 6 years ago

I know this is an old issue, but my installation has it too. I just started it yesterday and have had to restart magneticod three times because of this issue. I'm running the 0.6.0 release via pip3. Is there a way to get more recent code where this is fixed, or will this never get fixed due to the rewrite in Go?

boramalper commented 6 years ago

I am completely focused on the Go rewrite right now, and I am working day and night to make sure I can ship it by the end of summer, no later than the first week of September.

I think the problem is a race condition: between the time we check whether a torrent is already in the database and the time we add it to the queue, the torrent we've just checked may already have been added to the database, so SQLite fails because there would be duplicate rows.
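The check-then-queue pattern described above might look like the following sketch (function and variable names are hypothetical, not magneticod's actual code). The race window sits between the `SELECT` and the batch `INSERT`: here it is triggered deterministically by queueing the same infohash twice before the commit.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE torrents (info_hash BLOB NOT NULL UNIQUE, name TEXT)"
)

pending = []  # metadata queued for a later batch commit

def queue_if_new(info_hash: bytes, name: str) -> None:
    # Step 1: check -- the torrent is not in the database yet...
    row = conn.execute(
        "SELECT 1 FROM torrents WHERE info_hash = ?", (info_hash,)
    ).fetchone()
    if row is None:
        # ...so we queue it. But if the same infohash is seen and
        # queued again before the batch commit below runs, the batch
        # inserts it twice and the UNIQUE constraint fires.
        pending.append((info_hash, name))

h = bytes.fromhex("bb" * 20)
queue_if_new(h, "Some.Torrent")  # first sighting: queued
queue_if_new(h, "Same.Torrent")  # second sighting before commit: queued again!

try:
    conn.executemany("INSERT INTO torrents VALUES (?, ?)", pending)
except sqlite3.IntegrityError as e:
    print(e)  # UNIQUE constraint failed: torrents.info_hash
```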

Now, I could have tried to fix this race condition, but I have neither the time nor the willingness to spend on the Python version, so instead SQLite will gracefully ignore those rows. =)
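SQLite's `INSERT OR IGNORE` conflict clause skips a row that would violate a `UNIQUE` constraint instead of aborting the whole batch. A minimal sketch of that workaround, again using a hypothetical schema rather than magnetico's actual code:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE torrents (info_hash BLOB NOT NULL UNIQUE, name TEXT)"
)

h = bytes.fromhex("cc" * 20)
pending = [(h, "Some.Torrent"), (h, "Duplicate.Of.It")]  # duplicate infohash

# OR IGNORE turns the UNIQUE violation into a silent skip, so the
# batch commit succeeds and only the first row is kept.
conn.executemany("INSERT OR IGNORE INTO torrents VALUES (?, ?)", pending)
conn.commit()

count = conn.execute("SELECT COUNT(*) FROM torrents").fetchone()[0]
print(count)  # 1
```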