jlouis / etorrent

Erlang Bittorrent Client

The fast resume table is not updated. #116

Closed jlouis closed 13 years ago

jlouis commented 13 years ago

In particular, the fast_resume table contains a bitfield which is always 0. This is a regression in fast resuming after a restart of the client. The problem is that etorrent_torrent_ctl uses this state to decide whether the piece check can be skipped: if an entry is present, no checking happens. And an entry is always present, with a state that says "we have nothing". So when you restart etorrent, it assumes that none of the torrents have been downloaded at all.

jlouis commented 13 years ago

Some investigation:

When etorrent_torrent_ctl starts up, it tries to load its piece state from the fast resume table. The idea is to trust the table blindly if it is there; if it is not, we check the individual pieces to build the state up. That is the goal, anyway. The problem is that we call etorrent_piece:to_list/1, which gives us a list of the piece indexes we supposedly have, say [0, 7, 13, 165], and we then proceed to check only those indexes. Thus if the table is either empty, or populated with all zeroes as it currently is, nothing is checked and we will never look for data on disk. That is rather bad.
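
A minimal sketch of that startup path shows why an all-zero table is fatal. Here check_piece/2 is an assumed helper for illustration; only etorrent_piece:to_list/1 is the call named above:

```erlang
%% Sketch of the startup check described above. check_piece/2 is an
%% assumed helper, not the real etorrent API. Only the indexes the
%% fast-resume state claims we have are ever hash-checked.
check_on_startup(TorrentId) ->
    Claimed = etorrent_piece:to_list(TorrentId), % e.g. [0, 7, 13, 165]
    [check_piece(TorrentId, Idx) || Idx <- Claimed].
    %% If Claimed == [], the comprehension does nothing: no piece is
    %% checked, so data already on disk is never discovered.
```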

The other problem is that we store all zeroes in the first place. We never put the right kind of state into the fast resume table, and thus we cannot rely on the table when we want to resume.
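
To make that concrete, here is a sketch of the write path under an assumed name (etorrent_fast_resume:update/2 is illustrative): the table faithfully stores whatever piece set it is handed, so a wrong in-memory set is persisted verbatim and then trusted blindly on the next restart.

```erlang
%% Sketch only; etorrent_fast_resume:update/2 is an assumed name.
%% Garbage in, garbage trusted: an all-zero PieceSet is stored as-is.
store_snapshot(TorrentId, PieceSet) ->
    etorrent_fast_resume:update(TorrentId, PieceSet).
```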

jlouis commented 13 years ago

I have a fix for part 1. I am working on part 2.

jlouis commented 13 years ago

The second problem is this: we call etorrent_torrent_ctl:valid_pieces/1 to figure out which pieces are valid, but that value in etorrent_torrent_ctl is only set at startup and never updated afterwards. So the set of valid pieces never tracks the pieces we complete, the internal state is wrong, and we more or less always store an all-zero entry. That accounts for a number of the problems with respect to piece states.
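
A sketch of the fix, assuming the controller is a gen_server holding a gb_sets set of valid piece indexes in its state (the record and message names are illustrative, not the real etorrent code): every piece that passes its hash check must be added to the set, so the value behind valid_pieces/1 reflects the live state rather than the startup snapshot.

```erlang
-record(state, {valid_pieces = gb_sets:empty() :: gb_sets:set()}).

%% Cast when a piece has been downloaded and has passed its hash check.
handle_cast({piece_valid, Idx}, #state{valid_pieces = Set} = S) ->
    {noreply, S#state{valid_pieces = gb_sets:add_element(Idx, Set)}};
handle_cast(_Msg, S) ->
    {noreply, S}.

%% The fast-resume snapshot now reads the live set, not a stale copy.
handle_call(valid_pieces, _From, #state{valid_pieces = Set} = S) ->
    {reply, {ok, Set}, S}.
```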

jlouis commented 13 years ago

This has been fixed by a commit.