BetaRavener / uPyLoader

File transfer and communication tool for MicroPython boards
MIT License

Recursive copy / Recursive delete (work in progress) #63

zindy opened this issue 5 years ago

zindy commented 5 years ago

Hi all,

My uPyLoader fork can now copy folder trees to the MCU and also delete non-empty folders on the MCU side (using code provided by @JustusW over here, thanks mate!).
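(For reference, the recursive delete boils down to something like the sketch below; this is just the general shape rather than the exact code from @JustusW, and 0x4000 is the directory bit of the st_mode value returned by os.stat().)

import os

def rmdir_recursive(path):
    # Remove the contents of a directory first, then the directory itself.
    for entry in os.listdir(path):
        full = path + "/" + entry
        if os.stat(full)[0] & 0x4000:  # sub-directory
            rmdir_recursive(full)
        else:
            os.remove(full)
    os.rmdir(path)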

Unfortunately, when copying large folder trees something isn't quite right and the process can abort with errors. The MCU replies with the last base64-encoded string rather than the expected #0...#4 replies, which may suggest I'm feeding lines to the MCU faster than it can process them. This is a bit annoying, because the only thing I can pin it down to is "it was working before": it worked when I was using my mangled __upload.py as per @BetaRavener's approach (connection.run_file() with upload.py), and it fails now when transferring the same folder with my new __upl module:

            self.send_start_paste()
            self.send_line("import __upl; __upl.upload(\"{}\")".format(file_name))
            self.send_end_paste()

This is a new module I'm working on, which has upload, download and remove methods all under the same roof. If I can't figure it out, I'll come and ask for help! For now, the workaround is to hard-reset the MCU and try again.
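For context, the interface I'm converging on looks roughly like this (a sketch of the shape only; the real module also implements the chunked transfer protocol and error handling):

# __upl.py (sketch): everything the host needs, behind three entry points.

def upload(file_name):
    # Receive data from the host over the serial link and write it to file_name.
    pass

def download(file_name):
    # Read file_name and stream its contents back to the host.
    pass

def remove(path):
    # Delete a file, or recursively delete a directory tree.
    pass

The host side then only ever needs one-liners like the import above.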

I also corrupted the filesystem on the MCU at some point during testing, so proceed with caution if you care about your files.

All this suggests a couple of enhancements I need to put in place:

... So basically, I need to implement a context menu for file operations on the MCU side (edit, rename, delete, copy...) and maybe an "MCU operations" menu for things like format, flash, initialize...

Anyway, these are cosmetic; I'll try to work out what I'm doing wrong in my connection._write_steps_job() and report back.

Cheers, Egor

BetaRavener commented 5 years ago

The only thing I can think of right now with your _upl file is that it's long. Check with a diff tool that it doesn't have any errors.

Before the transfer scripts are on the MCU, the only way to write them is by sending a chunk of code into the terminal that creates the file and writes its contents. The transfer at this stage is unreliable, which is why I kept the files short and split them into two. If you used this method to initialize _upl on the MCU, there's a possibility that your file got corrupted.
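To illustrate, that bootstrap amounts to pasting something along these lines into the REPL; this is only a minimal example of the idea, the real payload is the whole transfer script written out as string chunks:

# Sent line-by-line over the serial REPL to create the transfer script on the MCU.
# Any glitch on the link at this point silently corrupts the resulting file.
f = open("__upload.py", "w")
f.write("def upload(file_name):\n")
f.write("    ...\n")
f.close()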

And yeah, it's entirely possible that the communication channel / MCU is not fast enough, so I guess you'll also need to try adding some delays, which are always the fun part since you have to guess them. Anyway, nice progress.

zindy commented 5 years ago

Hi Ivan,

I found why!

The only thing I can think of right now with your _upl file is that it's long. Check with a diff tool that it doesn't have any errors.

Actually, I was thinking about that, and this is where / why our code for run_file() differs a bit.

In your case (and please correct me if I'm wrong) you define file_name as a global variable and run your code using exec():

self.send_line(globals_init, "\r")
self.send_line("with open(\"{}\") as f:".format(file_name))
self.send_line("    exec(f.read(), globals())")

which (as far as I understand) means that you have to keep the code small enough for it to be loaded into RAM each time, compiled, and run. However, after the code has run, no residual memory is used (apart from the global variables).

In my case, I just import __upl and run the appropriate method with file_name as an argument. That means the whole __upl module stays in RAM once imported, unless it is explicitly deleted. This is great because it is already available the next time it's needed, but one small issue with my approach is that if __upl is modified, the MCU needs to be reset.
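(One alternative to a full reset: MicroPython caches imported modules in sys.modules just like CPython, so dropping the cached entry before re-importing should pick up the modified file. Something like:)

import sys

# Force the next import to re-read __upl.py from flash rather than use the
# copy already cached in RAM.
if "__upl" in sys.modules:
    del sys.modules["__upl"]
import __upl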

So throughout my code, I use a single send_line() command for calling various methods, for example:

self.send_line("import __upl; __upl.remove(\"{}\")".format(file_name)

Actually, I found the code you wrote for uploading and downloading files to be pretty robust, and that wasn't my issue at all.

The issue was OSError 28... no space left on the device. So I am now able to catch that as part of __upl.upload() and will issue a new connection error when it happens (#5).

I might try to work out how to use os.statvfs("") before writing to the FS, in which case I would also need to check how much space os.mkdir() uses.
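For the record, the free-space check would look roughly like the sketch below. os.statvfs() returns a plain tuple on MicroPython; which mount point to pass and exactly which fields to multiply can differ between ports, so treat this as an approximation:

import os

def free_bytes(path="/"):
    # f_frsize (index 1) * f_bfree (index 3) ~= free space in bytes.
    st = os.statvfs(path)
    return st[1] * st[3]

# Rough pre-flight check before a transfer (ENOSPC is errno 28):
# if free_bytes() < expected_size:
#     raise OSError(28)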

That's all for today!

Cheers, Egor

BetaRavener commented 5 years ago

Ah wow, interesting read. Yes, you're correct about the global variable and my intention to limit the lifetime of the function to just the transfer. But in your case it makes more sense to have it persistently defined, and I already considered this approach for other functions like listing the directory tree in the directory support branch - so no harm there, go for it. :)

BetaRavener commented 5 years ago

Hi Egor (@zindy), could you give me a quick status update on your implementation? Would it eventually be possible to merge it, or is it still unstable?

zindy commented 5 years ago

Hi Ivan,

I would say the code I wrote is definitely working and could be merged as is. An independent check would be nice to try and uncover any bugs / unwanted behaviour. If I send you a pull request, maybe you could merge it into a temporary branch to let people try it?

Things I changed:

In terms of actually using uPyLoader in a "compile / upload / run / test" loop type of workflow, this is where I'm kind of stuck for now. There are still commands to execute and buttons to press that prevent this workflow from being seamless.

For example, I need to break out of any running loop on the target before I can run my __upl.py. Also, I don't have a filesystem watcher (yet) to monitor file changes on the host and upload them automatically.
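Even a dumb polling loop on the host side would probably do as a first pass for the watcher. A stdlib-only sketch, where upload_file stands in for whatever uPyLoader call ends up doing the actual transfer:

import os
import time

def watch_and_upload(root, upload_file, interval=1.0):
    # Poll a directory tree and call upload_file(path) whenever a .py file
    # changes. Note: everything is treated as "changed" on the first pass.
    mtimes = {}
    while True:
        for dirpath, _dirs, files in os.walk(root):
            for name in files:
                if not name.endswith(".py"):
                    continue
                path = os.path.join(dirpath, name)
                mtime = os.stat(path).st_mtime
                if mtimes.get(path) != mtime:
                    mtimes[path] = mtime
                    upload_file(path)
        time.sleep(interval)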

Then I was thinking of an option menu to let the user decide what happens when a file is added / modified / deleted on the host:

And finally, on the REPL console front... that's going to be a tough one to crack properly. Maybe using an existing in-memory VT emulator could help here? E.g. pyte.
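(To sketch the idea: pyte keeps an in-memory terminal that can be fed the raw bytes coming back from the serial port, and the rendered screen can then be painted into the GUI console. For example:)

import pyte

# An in-memory 80x24 terminal; feed it whatever arrives from the MCU.
screen = pyte.Screen(80, 24)
stream = pyte.ByteStream(screen)

stream.feed(b"\x1b[2J\x1b[Hhello from the MCU\r\n>>> ")
for line in screen.display:  # 24 strings, escape sequences already applied
    print(line.rstrip())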

That's all I can think of for now. I'm waiting for a new board and will pick this up when it arrives. We've already talked about the trade-off between creating the perfect development tool and actually using it. I'm happy enough with the state of my branch to use it myself, but simplifying the workflow is definitely something I want to look into before I'm satisfied with my changes :)

Cheers, Egor