Closed: zombiesonore closed this issue 6 years ago.
Greetings. I am also suffering from this bug. My daemon ran for about 5 hours until it hit this error at epoch 1508661058. I'm no programmer, but I'm assuming the code that commits to the database needs to handle duplicate hashes a little better. I understand that different nodes sometimes carry different display names for the same hash; I have no source for that, just anecdotal evidence from things I have seen before. It depends on which peers you obtain from the DHT.
@boramalper Is there a way around this, or are you focusing on the rewrite right now? I saw https://github.com/boramalper/magnetico/issues/97, which was closed, about a related issue. I just started trying out this project and ran into this after a few hours as well. Cheers.
I know this is an old issue, but my installation has it too. I just started it yesterday and have already had to restart magneticod 3 times because of this issue. I'm running the 0.6.0 release via pip3; is there a way to get more recent code where this is fixed, or will this never get fixed due to the rewrite in Go?
I am completely focused on the Go rewrite right now, and I am working day and night to make sure I can ship it by the end of summer, no later than the first week of September.
I think the problem is a race condition: between the time we check whether a torrent is already in the database and the time we add it to the queue, the torrent we've just checked may already have been added to the database, so SQLite fails because there would be duplicate rows.
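Roughly, the racy check-then-insert pattern looks like this (a minimal sketch only; the `torrents` table, column names, and helper are illustrative, not magnetico's actual code):

```python
import sqlite3

def queue_if_new(conn: sqlite3.Connection, queue: list,
                 info_hash: bytes, name: str) -> None:
    # Step 1: check whether the torrent is already in the database.
    row = conn.execute(
        "SELECT 1 FROM torrents WHERE info_hash = ?", (info_hash,)
    ).fetchone()
    if row is None:
        # Step 2: queue it for insertion. Another task can insert the
        # same info_hash between the check above and the eventual plain
        # INSERT, which then raises sqlite3.IntegrityError
        # ("UNIQUE constraint failed").
        queue.append((info_hash, name))
```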
Now, I could have tried to fix this race condition, but I have neither the time nor the willingness to spend on the Python version, so instead SQLite will gracefully ignore those rows. =)
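For reference, "gracefully ignore" in SQLite terms means using `INSERT OR IGNORE`, which silently skips rows that would violate a UNIQUE constraint instead of raising an error. A minimal sketch, assuming a UNIQUE constraint on `info_hash` (the exact schema here is assumed, not necessarily magnetico's):

```python
# With OR IGNORE, a duplicate info_hash is dropped silently rather than
# raising sqlite3.IntegrityError, so the losing side of the race is a no-op.
conn.execute(
    "INSERT OR IGNORE INTO torrents (info_hash, name, discovered_on) "
    "VALUES (?, ?, ?)",
    (info_hash, name, discovered_on),
)
conn.commit()
```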
Very nice project! Just to let you know, after running for more than 48 hours:
There is plenty of space on my hard drive. I'm running Debian 9 (stable, up to date), and installed magnetico last week with pip3. Bye!