l0b0 / mian

Mine analysis - Graph blocks to height in a Minecraft save game
https://github.com/l0b0/mian/wiki
GNU General Public License v3.0

Working mian with beta format. #9

Closed Fenixin closed 13 years ago

Fenixin commented 13 years ago

First commit. Done mainly by copy & pasting from the old mian.

[edit] plus a few improvements

pepijndevos commented 13 years ago

Good work! How is the speed now?

Fenixin commented 13 years ago

Thanks! (also good work for l0b0!)

Not sure about the speed... just finished, tested and went to sleep. I'll try to compare with the old version today.

Fenixin commented 13 years ago

Bad news: the speed is worse than the per-chunk-count version. It's not horrible, but it's a bit slower, and in theory it should be faster. I'll try some stuff. Tell me if you have any ideas for improvements.

pepijndevos commented 13 years ago

Why would it be faster? Extra work needs to be done to extract the chunks. The speed gain seen in Minecraft is mainly because Windows has bad support for small files, if I understand things correctly. Are any of you on Windows?

Fenixin commented 13 years ago

I did some profiling (using cProfile) of my improve-speed branch; the top 5 results sorted by time are:

Mon Feb 28 13:48:41 2011    mian.prof

         653867 function calls (649820 primitive calls) in 214.948 CPU seconds

   Ordered by: internal time
   List reduced from 2213 to 5 due to restriction <5>

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
       39  181.916    4.665  194.041    4.975 mian/mian.py:247(count_blocks)
    35064   12.123    0.000   12.123    0.000 {method 'count' of 'str' objects}
    23804   10.711    0.000   10.711    0.000 {zlib.decompress}
       39    4.103    0.105   17.062    0.437 mian/mian.py:258(extract_region_blocks)
        1    1.804    1.804    2.563    2.563 {built-in method mainloop}

The way we count blocks needs improvement... I'm thinking of the old way; it was really fast...
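For reference, profiling output like the above can be reproduced with Python's cProfile and pstats modules. This is only a sketch: `count_blocks` here is a stand-in for the real function in mian/mian.py, not its actual implementation.

```python
import cProfile
import io
import pstats

def count_blocks(data):
    # Stand-in for mian's count_blocks: tally each possible block ID
    # with one C-level bytes.count() call per ID.
    return [data.count(bytes([block_id])) for block_id in range(256)]

profiler = cProfile.Profile()
profiler.enable()
counts = count_blocks(b"\x01\x02\x01" * 10000)
profiler.disable()

# Sort by internal time (tottime) and keep the top 5 entries,
# matching the "Ordered by: internal time" report above.
stream = io.StringIO()
stats = pstats.Stats(profiler, stream=stream)
stats.sort_stats("tottime").print_stats(5)
print(stream.getvalue())
```

The same report can be produced from the command line with `python -m cProfile -o mian.prof mian.py ...` followed by loading the dump with `pstats.Stats("mian.prof")`.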

Fenixin commented 13 years ago

I thought it was going to be faster because we are counting over region files, keeping all the chunks of the region file in memory, and each region file has up to 1024 chunks, so fewer calls are made to count...

Just implemented the old way of counting the chunks, and now it's about 3 times faster... so, happy ending. It turns out that dictionaries are much slower than lists for lots of iterations. I'm going to merge and push my last changes to this branch.
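A minimal sketch of the two counting strategies being compared. The function names and the chunk data are hypothetical, not mian's actual code; the point is that a per-byte Python loop into a dict pays interpreter overhead on every block, while one `bytes.count()` call per block ID runs the inner loop in C.

```python
from collections import defaultdict

def count_with_dict(chunk):
    # Per-byte Python loop accumulating into a dict: simple, but every
    # iteration goes through the interpreter, so it is slow.
    counts = defaultdict(int)
    for block in chunk:  # iterating over bytes yields ints in Python 3
        counts[block] += 1
    return counts

def count_with_str_count(chunk):
    # The "old way": one C-level count() call per possible block ID,
    # scanning the whole chunk blob each time but with no Python loop
    # over individual bytes.
    return [chunk.count(block_id) for block_id in range(256)]

chunk = bytes([1, 1, 3, 7, 1]) * 1000
dict_counts = count_with_dict(chunk)
list_counts = count_with_str_count(chunk)
assert dict_counts[1] == list_counts[1] == 3000
```

Whether 256 full scans beat one per-byte pass depends on how many distinct block IDs actually occur, but for CPython-era code like this the C-level scans usually win by a wide margin.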

l0b0 commented 13 years ago

Ding! Thank you very much for the work! Some more fixes are available, and I'll release version 0.9 right now.