I've been using `uf2conv.py` for some bootloader experiments because it's a nice tool with a great command line interface. I noticed its runtime grows superlinearly with file size, so it gets very slow on large files. Here is a quick repro (on Python 3.8.5):
$ cd uf2/utils
$ dd if=/dev/urandom of=rand.bin bs=16M count=1
$ time python3 uf2conv.py rand.bin -o rand.uf2
Converting to uf2, output size: 33554432, start address: 0x2000
Wrote 33554432 bytes to rand.uf2
real 4m30.839s
user 1m45.575s
sys 2m44.299s
This patch replaces the repeated concatenation of `bytes` objects in `convert_to_uf2()` and `convert_from_uf2()` with a single `b"".join()`:
$ git checkout fix_byte_concat
$ time python3 uf2conv.py rand.bin -o rand.uf2
Converting to uf2, output size: 33554432, start address: 0x2000
Wrote 33554432 bytes to rand.uf2
real 0m0.267s
user 0m0.160s
sys 0m0.089s
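The change boils down to the following pattern. This is a minimal sketch of the general technique, not the actual uf2conv.py code: repeated `+=` on a `bytes` object copies the whole accumulated buffer on every iteration, which is quadratic overall, while collecting the pieces in a list and joining once at the end is linear.

```python
def build_slow(blocks):
    # Each += allocates a new bytes object and copies everything
    # accumulated so far: O(n^2) total copying.
    out = b""
    for block in blocks:
        out += block
    return out

def build_fast(blocks):
    # Collect the pieces, then concatenate once: O(n) total.
    pieces = []
    for block in blocks:
        pieces.append(block)
    return b"".join(pieces)
```

Both functions produce identical output; only the asymptotic cost differs, which is why the effect only shows up on large inputs.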
I realise that the performance of the Python-based version of the tool might not be of the highest concern, but this is a fairly small change, and makes a big difference.