openUC2 / UC2-Software-GIT

This repository hosts all necessary software for the UC2 project.
https://useetoo.org

Berryconda installation 404 and confusion over pip usage #13

Closed AlecVercruysse closed 4 years ago

AlecVercruysse commented 4 years ago

Was hoping to install the GUI to get a minimal example of how you use MQTT to communicate with the Z-stage ESP32 and the LED matrix ESP32. Along the way, I've run into some issues:

In the UC2-env section of the install guide in the GUI/RASPBERRY_PI readme, all the wget commands (and some hyperlinks in the readme) return 404 since the folder is not found.

It looks like this is just an artifact of moving the SCRIPTS/ folder to be a subdir of /GUI/RASPBERRY_PI?

If these scripts are only there to install berryconda, may I propose that the recommendation of using berryconda be dropped? It seems that berryconda will no longer be actively maintained (according to jjhelmus/berryconda#83). It already lacks Python 3.7 support (jjhelmus/berryconda#40).

This install could be made much simpler, in my opinion, by including a simple requirements.txt file and advising users to use the built-in venv module (python3 -m venv) to manage their virtual environments. Did berryconda bring anything else to the table other than some pre-compiled scientific packages?


Along these same lines, I believe that python -m pip install --user either circumvents the venv completely (installing into ~/.local/lib) or fails on purpose, since the --user option specifies an install location outside of the venv. The previous behavior was the result of a recently closed bug: pypa/pip#5702. The fix for this was released early this year, so it is very likely that previous installations using this guide depended on that.

Since venvs put their own pip first on the path, the original reason for recommending python -m pip in the first place (to circumvent ambiguous paths) no longer really applies either, and, in my opinion, it is safe to just use the simplest version of the command: pip install. Am I missing another explicit reason for this method of installing packages?
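
As a quick illustration of that point, here is a minimal sketch, assuming an activated venv (the UC2-env path in the comments is only an example): inside the environment, both the interpreter and pip resolve to the environment's own bin/ directory, so the plain pip install form is already unambiguous.

```python
# Sanity check inside an activated venv: both python and pip live in the
# environment's own bin/ directory. Paths in the comments are examples only.
import shutil
import sys

print(sys.executable)       # e.g. /home/pi/UC2-env/bin/python
print(shutil.which("pip"))  # e.g. /home/pi/UC2-env/bin/pip
```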

Please let me know if you recommend I submit a PR for this (I'm still in the process of dealing with the kivy build to set up the GUI, so I would have to wait a bit until I'm sure everything works).

renerichter commented 4 years ago

Dear Alec, thank you very much for your email and for trying out our GUI. Regarding your points:

Was hoping to install the GUI to get a minimal example of how you use MQTT to communicate with the Z-stage esp32 and the led matrix esp32. Along the way, I've run into some issues:

Is this working now? If not, shall I send you a short script for communication with the devices directly from command-line?

In the UC2-env section of the install guide in the GUI/RASPBERRY_PI readme, all the wget commands (and some hyperlinks in the readme) return 404 since the folder is not found. It looks like this is just an artifact of moving the SCRIPTS/ folder to be a subdir of /GUI/RASPBERRY_PI?

We should fix that!

If these scripts are only there to install berryconda, may I propose that the recommendation of using berryconda be dropped? It seems that berryconda will no longer be actively maintained (according to jjhelmus/berryconda#83). It already lacks Python 3.7 support (jjhelmus/berryconda#40). This install could be made much simpler, in my opinion, by including a simple requirements.txt file and advising users to use the built-in venv module (python3 -m venv) to manage their virtual environments. Did berryconda bring anything else to the table other than some pre-compiled scientific packages?

You hit the nail on the head. We use berryconda for the precompiled libraries, which performed well enough during our testing, so we did not have to invest in the rather heavy on-device compilation times for e.g. scipy. Still, you have a point there. I read through the thread and noted that some people suggest installing a 64-bit OS on the Raspi 4 in order to use Miniforge (https://github.com/conda-forge/miniforge/). The 64-bit Raspbian still seems to have several issues (https://www.cnx-software.com/2020/06/21/checking-out-raspberry-pi-os-64-bit-on-raspberry-pi-4-8gb-ram/), which is why we could go with Ubuntu MATE instead (e.g. https://homenetworkguy.com/how-to/install-ubuntu-mate-20-04-lts-on-raspberry-pi-4/). Yet, for our purposes the libraries available with berryconda are sufficient, and we hope that no major bugs or security issues will arise too soon... ^^

Along these same lines, I believe that python -m pip install --user either circumvents the venv completely (installing into ~/.local/lib) or fails on purpose, since the --user option specifies an install location outside of the venv. The previous behavior was the result of a recently closed bug: pypa/pip#5702. The fix for this was released early this year, so it is very likely that previous installations using this guide depended on that. Since venvs put their own pip first on the path, the original reason for recommending python -m pip in the first place (to circumvent ambiguous paths, see https://bugs.python.org/issue22295) no longer really applies either, and, in my opinion, it is safe to just use the simplest version of the command: pip install. Am I missing another explicit reason for this method of installing packages?

We had it as simple as you point out before, but went back to the more robust version using python -m pip. The --user option came in later, is irrelevant here and can be dropped. Still, I think that dropping python -m pip builds on the assumption that the most recent pip is installed on the system, which will not be the case when using berryconda. Hence, in the long term we should shift to a 64-bit OS with e.g. Miniforge, and then we should use the simplified commands.

Please let me know if you recommend I submit a PR for this (I'm still in the process of dealing with the kivy build to set up the GUI).

I am always happy to see a PR with fixes for the mentioned content.

Thank you for pointing all these things out. If you have any questions about anything regarding the soft- and hardware code of the UC2 system (e.g. how MQTT is implemented), then let me know directly. :)

Best René

AlecVercruysse commented 4 years ago

Sounds good, I'll start a PR that fixes the paths and removes --user from the pip options.

It makes sense to stick with berryconda for now until 64-bit Raspbian becomes more mainstream.

I would love a cli script that shows communication from the command line. Ultimately, since my team is building an FPM setup and would like to pursue fast computation, I think the pi will be relegated to sending the photos to a more powerful computer, and communicating with the matrix/z-stage over mqtt, rather than hosting a GUI. We will make sure to watch for updates in the documentation of the GUI however!

renerichter commented 4 years ago

Dear Alec,

I would love a cli script that shows communication from the command line. Ultimately, since my team is building an FPM setup and would like to pursue fast computation, I think the pi will be relegated to sending the photos to a more powerful computer, and communicating with the matrix/z-stage over mqtt, rather than hosting a GUI. We will make sure to watch for updates in the documentation of the GUI however!

I pushed a small script that demonstrates the usage of the MQTT commands via our standard MQTTDevice class and a testing class. Please find it here:

https://github.com/bionanoimaging/UC2-Software-GIT/tree/master/HARDWARE_CONTROL/ESP32/GENERAL/MQTTtest
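
For a first orientation, here is a minimal, generic sketch of the idea (using paho-mqtt 1.x directly, not our MQTTDevice class); the broker address, topic names and payloads below are placeholders, not the actual UC2 scheme:

```python
# Generic sketch: publish commands to the Z-stage and LED-matrix ESP32s via
# a central MQTT broker. Broker IP, topics and payloads are placeholders.
import paho.mqtt.client as mqtt

BROKER = "192.168.4.1"           # placeholder: broker running on the RasPi
TOPIC_ZSTAGE = "uc2/zstage"      # placeholder topic for the Z-stage ESP32
TOPIC_LED = "uc2/ledmatrix"      # placeholder topic for the LED-matrix ESP32

client = mqtt.Client()           # paho-mqtt 1.x style constructor
client.connect(BROKER, 1883, keepalive=60)
client.loop_start()

client.publish(TOPIC_ZSTAGE, "MOVE+100")    # placeholder payload
client.publish(TOPIC_LED, "PATTERN:ring")   # placeholder payload

client.loop_stop()
client.disconnect()
```

The script linked above shows the actual class and command set we use.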

I agree that the code could be commented and documented more thoroughly, but I was in a hurry to make it work for you so you can continue. I also have not described it in the GitHub MD files yet. If you have further questions, please let me know. We already included an FPM mode in our GUI, but since we last used it before many, many updates, I think it might not work as expected right now. So if I understand correctly, you want to take all images with the RasPi and then send them to another PC for processing. Do you:

When we wrote the GUI, we made many awfully wrong design choices and have now revamped the whole concept, making the whole toolbox (in the end) fit whatever GUI sits on top. The release of this new version will take quite a while, though, as we currently do not have enough man-power to realize the finished code designs. Hence: if we can make the GUI work for your purpose, I would be very pleased. Maybe we can find a way in which your team works on simplifying the installation while we try to make your imaging work from the start?

I further commented on your PR and am looking forward to your suggestions. :)

All the best René

AlecVercruysse commented 4 years ago

Thanks for the script! I will use it ASAP to start understanding your communication protocol.

I don't mean to rush the GUI documentation; I see that it is marked in the readme as something that will be added at some point!

For our application, we hope to at some point be able to take (and transfer) images quickly to our processing server from a headless RPi. Since we are using the Raspberry Pi HQ camera, we are actually planning on processing the Bayer data from the sensor (as seen in Aidukas, T., Eckert, R., Harvey, A.R. et al., Low-cost, sub-micron resolution, wide-field computational microscopy using open-source hardware).

Ideally, we would transfer our images as quickly as possible to be saved on the main PC. I believe that our first implementation will use picamera and stream the images to a BSD socket, as described in the picamera docs.
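
Roughly, I imagine the Pi-side sender looking like the following sketch, loosely following that picamera recipe; the server address is a placeholder, and bayer=True appends the unprocessed Bayer data to the JPEG so it can be recovered on the receiving end.

```python
# Sketch of a Pi-side sender: capture one frame (with raw Bayer data attached)
# and push it over a plain socket to the processing server. The server address
# is a placeholder for illustration only.
import socket
import picamera

SERVER = ("192.168.1.100", 8000)        # placeholder processing server

with picamera.PiCamera() as camera:
    camera.resolution = (4056, 3040)    # full resolution of the HQ camera
    sock = socket.socket()
    sock.connect(SERVER)
    connection = sock.makefile("wb")
    try:
        # bayer=True embeds the raw Bayer data in the JPEG's metadata
        camera.capture(connection, format="jpeg", bayer=True)
    finally:
        connection.close()
        sock.close()
```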

The computation program on the server will most likely start out as a script run from the CLI, but would later ideally turn into a simple Flask app that provides a basic GUI via the web browser.
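
Something as minimal as the following Flask sketch would be the starting point; the route and its contents are purely illustrative, not existing code.

```python
# Minimal placeholder for the server-side web front-end; the real app would
# trigger the FPM reconstruction script and display results/status.
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    return "FPM processing server is running."

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```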

I am a bit confused by your request. What work needs to be done on the new application? I would be happy to help contribute to that. If not, I would also be glad to help simplify the install instructions. I do believe that as long as kivy is included as a dependency of your older version of the GUI, the install will unfortunately be an arduous process. Additionally, I do get a lot of crashes when I try the GUI, but I will attempt another clean reinstall with conda and see if that fixes anything.

I apologize, but I see no comments on my PR except my own. Is your comment elsewhere?

renerichter commented 4 years ago

Dear Alec, I hope the script serves its purpose. I finally merged the PR. Thank you again for the good suggestions.

For our application, we hope to at some point be able to take (and transfer) images quickly to our processing server from a headless RPi. Since we are using the Raspberry Pi HQ camera, we are actually planning on processing the Bayer data from the sensor (as seen in Aidukas, T., Eckert, R., Harvey, A.R. et al., Low-cost, sub-micron resolution, wide-field computational microscopy using open-source hardware, https://doi.org/10.1038/s41598-019-43845-9).

Ideally, we would transfer our images as quickly as possible to be saved on the main PC. I believe that our first implementation will use picamera and stream the images to a BSD socket, as described in the picamera docs (https://picamera.readthedocs.io/en/latest/recipes1.html#capturing-to-a-network-stream).

As we have not used the HQ camera yet, I would love to hear about your experiences with it. Taking the unprocessed Bayer data is an option we have in our GUI as well, but in its current state we do not have an implementation where the GUI is separate from a controlling program. This is the very point we are working on, so that e.g. installing our basic toolset and adding your own GUI (if even necessary) becomes easier. The questions regarding your image transfer are:

Ideally, we would transfer our images as quickly as possible to be saved on the main PC. I believe that our first implementation will use picamera and stream the images to a BSD socket, as described in the picamera docs.

How about using HTTP streaming or WebRTC for/with that? Do you have any experience in this area? We are currently delving a bit into this.

I am a bit confused by your request. What work needs to be done on the new application? I would be happy to help contribute to that. If not, I would also be glad to help simplify the install instructions. I do believe that as long as kivy is included as a dependency of your older version of the GUI, the install will unfortunately be an arduous process. Additionally, I do get a lot of crashes when I try the GUI, but I will attempt another clean reinstall with conda and see if that fixes anything.

Can you expand on the crashes and possibly open new issues for the necessary fixes? On our side we see no problems with the installation at all, but of course we gathered a lot of experience to get to this state... :')

I apologize, but I see no comments on my PR except my own. Is your comment elsewhere?

Fixed now and merged.

Best René

beniroquai commented 4 years ago

@AlecVercruysse Did you consider using an Nvidia Jetson instead of the Pi? I could imagine that, since it can run OpenCV on the GPU and the Picamera can also be used, it represents a viable alternative. I have had some good experiences with it.

AlecVercruysse commented 4 years ago

@renerichter, we are currently still trying to work all these parameters out. Imaging speed is not (yet) a big concern to us, but since we plan to experiment with the algorithms in the paper I referenced in my last comment, we need to work with completely unprocessed Bayer data.

We currently use HTTP streaming, with the very useful eLinux RPi-Cam-Web-Interface app, to access our camera. It works well for basic imaging and testing right now, but I have yet to see whether it is easily modifiable to set custom settings (e.g. exposure time) and provide raw Bayer data via HTTP requests.

I get a consistent crash with the GUI every time I click "start experiment," but I admit that since I do not know how to use the GUI at all, it could be a user mistake. I plan to look into it later when I hook up a monitor/mouse/keyboard to the Pi. The ESP32 code all worked very well, except that I switched from the Adafruit NeoPixel library to a FastLED port because of the issues caused by the interrupt-disabling, bit-banging behavior of the Adafruit lib (described in adafruit/Adafruit_NeoPixel#139). This helped improve stability on my ESP32; if you experience similar issues (it particularly affected my ability to connect to WiFi/the MQTT server), I can submit a PR for the fix.

@beniroquai the Nvidia Jetson looks promising, but I think that ultimately, since processing FPM can take almost half an hour on a Core i7 (according to the original paper), it is best practice to leave all calculations to an external machine for now. I could be wrong, but I also believe that FPM calculations cannot take advantage of the GPU since they're all floating point.

AlecVercruysse commented 4 years ago

I'll close this since the PR has been merged!