pimoroni / enviro


General clean-up & fixes #54

Closed: waveform80 closed this 2 years ago

waveform80 commented 2 years ago

Firstly, my apologies for stuffing a load of things into one PR. I've been working through the enviro code while playing around with the Grow, and I've consolidated various "miscellaneous" commits into this cleanup branch.

I've separated each change into its own commit, so do feel free to close the whole PR if you want none of it; if you only want certain bits, I'm happy to rebase and cut out the parts you don't want. Some changes (in particular json.dump in 609b112 and the write_config changes in edf5579) are arguable: they'll use less memory but more flash I/O (personally I've found this a good trade-off in MicroPython in the past, but you may want something different).
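For context, the json.dump change amounts to streaming the JSON straight into the file rather than building the whole string in RAM first. A rough sketch of the two approaches (the file name and data below are made up for illustration, not taken from enviro):

```python
import json

data = {"nickname": "enviro-grow", "reading_frequency": 15}

# dumps: build the whole JSON string in RAM, then write it in one go
# (one big allocation, but a single write to flash)
with open("readings.json", "w") as f:
    f.write(json.dumps(data))

# dump: stream the JSON straight into the file
# (lower peak RAM use, at the cost of more, smaller flash writes)
with open("readings.json", "w") as f:
    json.dump(data, f)
```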

I've tested the changes on an Enviro Grow board (including going through provisioning) and they appear okay so far, but I haven't tried them on anything else yet.

lowfatcode commented 2 years ago

Many thanks for this!

I've picked through these and combined the relevant fixes into a single commit here: https://github.com/pimoroni/enviro/commit/cebf8144a1a89fbe5f803a0886b5d6b85d40ff3e

A couple of comments:

Handle corrupted config with missing "provisioned"

This change stomps on the value of needs_provisioning, which is also used to determine whether the user requested provisioning. As an alternative I've made the except: block catch all errors - really we need both the import and the read of the provisioned flag to succeed, or there is a problem.
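For anyone reading along, the shape of that alternative is roughly the following; this is a sketch only, with names assumed rather than taken verbatim from the commit:

```python
# sketch only, not the actual enviro code; the names here are assumptions.
# needs_provisioning may already be True at this point because the user
# explicitly requested provisioning - we only ever set it, never clear it.
try:
    import config                 # fails if config.py is missing or corrupt
    if not config.provisioned:    # fails if the flag was never written
        needs_provisioning = True
except:
    # any failure above means the config can't be trusted
    needs_provisioning = True
```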

All cases are mutually exclusive, no need to evaluate them all

Logically, of course, I agree :-) Stylistically, however, I slightly prefer my way as it presents all the cases identically (no priority is implied, for example) and keeps the formatting consistent. There is no meaningful performance hit.
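To make the stylistic point concrete, here's a toy comparison; the names and thresholds are invented for illustration and don't come from enviro:

```python
# illustrative only: three mutually exclusive cases
LOW, HIGH = 10, 30
reading = 22

# style A: every case written out identically, no implied priority
if reading < LOW:
    level = "low"
if LOW <= reading <= HIGH:
    level = "ok"
if reading > HIGH:
    level = "high"

# style B: logically equivalent, but stops evaluating at the first match
if reading < LOW:
    level = "low"
elif reading <= HIGH:
    level = "ok"
else:
    level = "high"

print(level)  # "ok" either way
```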

Change write_config to operate line-by-line

I specifically avoid line-by-line I/O here because it causes awful performance over remote mounts (with mpremote/pyboard.py) - it can take tens of seconds to process the config file that way (wild, I know!). The config file is of a known size and I'm not concerned about it becoming too large for this approach, but if it did there are other ways to handle it, such as reading the file in chunks of, say, 1 kB at a time.
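For anyone following along, the trade-off is roughly one big read and write versus many tiny ones. A sketch under assumed file contents and an example edit (this is not the actual write_config implementation):

```python
# set up a toy config file so the sketch below runs standalone
with open("config.py", "w") as f:
    f.write("provisioned = False\nnickname = 'enviro-grow'\n")

# whole-file approach: a single read and a single write, so only a couple
# of I/O round-trips even over a slow remote mount
with open("config.py") as f:
    text = f.read()
text = text.replace("provisioned = False", "provisioned = True")  # example edit
with open("config.py", "w") as f:
    f.write(text)

# chunked fallback if the file ever became too large to hold in RAM:
# copy it across in fixed-size pieces, e.g. 1 kB at a time
with open("config.py") as src, open("config.py.new", "w") as dst:
    while True:
        chunk = src.read(1024)
        if not chunk:
            break
        dst.write(chunk)
```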

waveform80 commented 2 years ago

> Many thanks for this!

No problem, it's a lovely little board!

> I've picked through these and combined the relevant fixes into a single commit here: cebf814

> A couple of comments:

> Handle corrupted config with missing "provisioned"
>
> This change stomps on the value of needs_provisioning, which is also used to determine whether the user requested provisioning. As an alternative I've made the except: block catch all errors - really we need both the import and the read of the provisioned flag to succeed, or there is a problem.

Ah, I'd missed that case. Mostly I was concerned about the NameError case (i.e. that a config which, for whatever reason, lacks provisioned would cause a crash at that point), but I hadn't considered the "requested" scenario.

> All cases are mutually exclusive, no need to evaluate them all
>
> Logically, of course, I agree :-) Stylistically, however, I slightly prefer my way as it presents all the cases identically (no priority is implied, for example) and keeps the formatting consistent. There is no meaningful performance hit.

Ah, fair enough :)

> Change write_config to operate line-by-line
>
> I specifically avoid line-by-line I/O here because it causes awful performance over remote mounts (with mpremote/pyboard.py) - it can take tens of seconds to process the config file that way (wild, I know!). The config file is of a known size and I'm not concerned about it becoming too large for this approach, but if it did there are other ways to handle it, such as reading the file in chunks of, say, 1 kB at a time.

Interesting; I must get around to packaging mpremote for apt at some point.

My current workflow is to use rshell (which I managed to get into the apt archives in time for the jammy release), in particular its excellent rsync command. It's a very basic thing compared to "real" rsync, but it's still significantly faster at copying just the files I've changed from my Pi to the attached Pico.