Closed rishiloyola closed 7 years ago
@vkuznet First I fetch all the files of the requested dataset from the source DB. After that, I fetch the files of the same dataset from the destination agent. Then I store each destination file name as a key and its hash as the value in a separate map. This reduces the time complexity: the overall cost is O(N * m), where m is the map lookup cost, which is constant on average, so in practice the comparison is roughly linear in the number of files.
I will add the unit tests soon. Can you suggest a better approach for this?
@vkuznet I just added the test. I also benchmarked it, and it works fine. Check out the screenshot.
ok, good, please fix the %s placeholders in the printouts of the test.
Already fixed that. Check out this commit. You can merge this.
Rishi, how will this code behave if the agent needs to compare 1k-10k records? My understanding is that you want to compare a dataset request against another agent, and a dataset may contain 10k files. That's why I asked. Can you add a unit/integration test to verify that both the comparison function and the request function can sustain a request with 10k records?