Closed: ghost closed this 8 years ago
r? @maqi
(maidsafe_highfive has picked a reviewer for you, use r? to override)
The DataMap should contain the original file size for decryption.
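As an illustration only (the type and field names here are assumptions, not the actual self_encryption API), a `DataMap` that records the original file size would let decryption keep using the pre-write size even after `write` has updated the encryptor's in-memory size:

```rust
// Hypothetical sketch: names are illustrative, not the crate's real types.
#[derive(Debug, Clone)]
struct ChunkDetails {
    hash: Vec<u8>,
    source_size: u64,
}

// A DataMap that carries the original (pre-write) file size alongside the
// chunk list, so decryption does not depend on mutable encryptor state.
#[derive(Debug, Clone)]
enum DataMap {
    Chunks {
        chunks: Vec<ChunkDetails>,
        original_file_size: u64,
    },
    Content(Vec<u8>),
    None,
}

impl DataMap {
    // The size decryption should use, independent of any pending writes.
    fn len(&self) -> u64 {
        match self {
            DataMap::Chunks { original_file_size, .. } => *original_file_size,
            DataMap::Content(c) => c.len() as u64,
            DataMap::None => 0,
        }
    }
}

fn main() {
    let dm = DataMap::Chunks {
        chunks: vec![],
        original_file_size: 3135,
    };
    println!("original size = {}", dm.len());
}
```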
— David Irvine (twitter: @metaquestions, blog: http://metaquestions.me)
That seems to be fine. But I find the fn `prepare_window` quite suspicious...
Hmm, I am in another lib right now; perhaps @brian-js and @maqi can help dig into this. Thanks very much, by the way, for all the recent commits, really helpful.
When `write` is called, it changes `self.file_size`, then it calls `prepare_window`. `prepare_window` calls `get_chunk_number` (which uses the new file size) to figure out which chunks to decrypt. But shouldn't it use the original file size?
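A toy sketch can show why this matters. The math below is a simplification (a file split into three near-equal chunks, as the library does for small files), not the crate's real formula, but it demonstrates that the same byte position maps to a different chunk number depending on which file size is used:

```rust
// Simplified split: three near-equal chunks, last one absorbing any remainder.
// This is illustrative math, not self_encryption's actual implementation.
fn chunk_size(file_size: u64) -> u64 {
    file_size / 3
}

// Which chunk does byte `position` fall into, for a file of `file_size` bytes?
fn get_chunk_number(file_size: u64, position: u64) -> u64 {
    (position / chunk_size(file_size)).min(2)
}

fn main() {
    let old_size = 3135; // original file size from the DataMap
    let new_size = 4096; // hypothetical size after the pending write
    // Byte 1100 is in chunk 1 under the original size, but chunk 0 under
    // the new size -- so decrypting with the post-write size fetches the
    // wrong chunk.
    println!(
        "old: chunk {}, new: chunk {}",
        get_chunk_number(old_size, 1100),
        get_chunk_number(new_size, 1100)
    );
}
```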
Furthermore, the line `let pos = self.get_start_end_positions(i).0;` also uses the new file size, so when the later for loop writes the data to the sequencer, it is written at that position. So the chunk's position in the file used to be `self.get_start_end_positions(i)` according to the original file size, and now it gets written at `get_start_end_positions(i)` according to the new file size. I think that's the problem.
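Continuing the same simplified model (illustrative math, not the crate's real formula), the start offset of a chunk also shifts when the file size changes, which is exactly the mis-placement in the sequencer described above:

```rust
// Simplified start/end offsets for chunk `chunk` of a `file_size`-byte file
// split into three near-equal pieces. Names mirror the methods discussed;
// the arithmetic is an assumption for illustration.
fn get_start_end_positions(file_size: u64, chunk: u64) -> (u64, u64) {
    let size = file_size / 3;
    let start = chunk * size;
    let end = if chunk == 2 { file_size } else { start + size };
    (start, end)
}

fn main() {
    // Chunk 1 starts at byte 1045 under the original size, but at byte 1365
    // under the post-write size, so decrypted data lands at the wrong offset.
    println!("original: {:?}", get_start_end_positions(3135, 1));
    println!("post-write: {:?}", get_start_end_positions(4096, 1));
}
```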
Yeah, glad to help, it's fun :)
> When `write` is called, it changes `self.file_size`, then it calls `prepare_window`. `prepare_window` calls `get_chunk_number` - which uses the new file size - to figure out which chunks to decrypt. But shouldn't it use the original file size?

Yes, I suspect you are correct here. The issue to watch is writing while decrypting, but I feel you are right. It definitely needs investigating. It is a neat wee library for folks.
It seems to be a two-level issue:

1. The chunk status is incorrect. The old datamap contains 3 min-sized chunks, and the write puts in 1k of data, which results in 3 chunks as well. However, the chunk status doesn't get updated properly, i.e. the status stays `AlreadyEncrypted`, so the chunks don't get flushed into `Storage` properly through the `close()` method. Since the returned `data_map` now contains new info, this mismatch causes the decryption error.
2. I carried out a quick check by changing the default chunk status to `ToBeHashed` in the `new` constructor. This time, new chunks do get flushed to the storage; however, the sizes are not evenly split. The chunks for the first data will normally have sizes (1045, 1045, 1045), but the chunks for the new one will be like (1387, 1100, 1103).
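Both symptoms can be sketched in a few lines. This is hypothetical code, not the library's implementation: the enum and function names mirror the discussion, but the flush check and the even-split rule are assumptions made for illustration:

```rust
// Hypothetical sketch of the two observations above.
#[derive(Clone, Copy, PartialEq, Debug)]
enum ChunkStatus {
    AlreadyEncrypted,
    ToBeHashed,
}

// Point 1: a close() that skips chunks still marked AlreadyEncrypted.
// If a write fails to update the status, modified chunks never reach storage.
fn chunks_to_flush(statuses: &[ChunkStatus]) -> usize {
    statuses
        .iter()
        .filter(|s| **s != ChunkStatus::AlreadyEncrypted)
        .count()
}

// Point 2: the split expected for a small file -- three near-equal chunks,
// e.g. (1045, 1045, 1045) for 3135 bytes, differing by at most one byte.
fn three_way_split(file_size: u64) -> [u64; 3] {
    let base = file_size / 3;
    let rem = (file_size % 3) as usize;
    let mut sizes = [base; 3];
    for s in sizes.iter_mut().take(rem) {
        *s += 1;
    }
    sizes
}

fn main() {
    let stale = [ChunkStatus::AlreadyEncrypted; 3];
    println!("chunks flushed: {}", chunks_to_flush(&stale)); // 0: stale DataMap
    println!("{:?}", three_way_split(3135)); // near-equal split
    // The reported (1387, 1100, 1103) is not any near-equal split of its
    // 3590-byte total, which is the second symptom described above.
}
```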
I raised another PR (https://github.com/maidsafe/self_encryption/pull/152) containing the newly introduced test and code trying to resolve the exposed issue.
Currently closing this one.
The test fails, panicking.
I don't know if the network uses the `SelfEncryptor` this way, but I got the impression that you could pass an existing DataMap to `SelfEncryptor::new` and then write to the file. But that doesn't work, because the decryption process (and the many helper methods) rely on `self.file_size`, which the write call might change before decryption.

It's not enough to fix the problem, but I'd make the helper methods free functions, so that the caller would have to pass the file size explicitly, which would remind them what's happening exactly. The functions could then also be very easily unit tested thoroughly (all edge cases...) without creating any `SelfEncryptor` objects.
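The suggested refactor could look something like the sketch below. The chunk math is the same simplified three-way split used for illustration earlier, not the crate's real formula; the point is only the signature: `file_size` becomes an explicit parameter instead of being read from `self`, so each call site must say which size (original vs. post-write) it means, and the function is trivially unit-testable:

```rust
// Free-function version of the helper: no SelfEncryptor needed, and the
// caller chooses the file size explicitly. Simplified math for illustration.
fn get_chunk_number(file_size: u64, position: u64) -> u64 {
    let chunk_size = file_size / 3; // three near-equal chunks, as above
    (position / chunk_size).min(2)
}

fn main() {
    let original_size = 3135; // size recorded in the DataMap
    let post_write_size = 4159; // hypothetical size after a pending write
    // The explicit parameter makes a wrong-size call visible at the call
    // site instead of hiding inside `self.file_size`:
    assert_eq!(get_chunk_number(original_size, 1100), 1);
    assert_eq!(get_chunk_number(post_write_size, 1100), 0);
    println!("ok");
}
```

Because the function depends only on its arguments, edge cases (position 0, position at a chunk boundary, positions past the last boundary clamping to chunk 2) can each be covered by a one-line test.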