Open mingwandroid opened 3 years ago
Hi. I'm not familiar with either of those systems, but I'm open to it. Could you provide some pros/cons for what you're proposing and briefly how it would be done?
Thanks for the reply! I'm simply more used to developing things on desktop machines, and I'm a software packager by trade.
Apart from the stuff mentioned above, I'd like to get Mycodo running on official Ubuntu for Raspberry Pi, and on aarch64. This is because it's the only OS I've found that handles my SSD adapter correctly; the very latest Raspberry Pi OS gives UAS errors after about 20 minutes of use.
I'd like end users to be able to install and run as much of the 'stack' as possible on any Linux distro, macOS, or even Windows. While I'd only use this for development myself (I do think the Raspberry Pi is the best tool for this job), I'd be more than happy to maintain the build scripts (PKGBUILDs for MSYS2, recipes for conda) as best I can.
Obviously certain parts are quite Raspberry Pi specific, such as the GPIO, and those features would have to be stubbed out when running on e.g. Windows.
I would like an end user to be able to run (on Windows, mac or Linux):
conda install -c conda-forge mycodo
or (on Windows):
pacman -S mingw-w64-x86_64-mycodo
.. and have as much running as possible.
I do not think the job is too difficult, since conda (via conda-forge in particular) provides many open source libraries and MSYS2 has a rapidly expanding Python package suite too. I haven't looked into the situation with InfluxDB on Windows, but I know that conda-forge has packages already.
The con is that if I release really terrible packages of Mycodo to either distribution and those packages become wildly popular (hopefully, if they're terrible, they won't be!), some people could incorrectly come and ask upstream for support without coming to me first (I would list myself as the maintainer in the packages, of course!)
Alternatively, I do not need to package it at all; I'd be happy to just provide the build scripts, either here in a PR as a kind of 3rd-party / useful-hacking-tools thing, or as (unreleased) recipes in the respective source repositories for MSYS2 and conda-forge.
MSYS2 is a GCC-compiled distribution focused on open source; it is very active, popular, and easy to contribute to. It forms the basis of Git for Windows and the GNOME software stack (insofar as that exists on Windows!)
conda-forge is similar, though with a more Pythonic and scientific bias, and it provides consistency across the three main desktop OSes.
Cheers!
p.s. Having said all that, the new Raspberry Pi 400 does make a pretty good development machine, even if the final WiringPi release doesn't support it (nor aarch64, AFAICT?).
Thank you for the explanation. I've looked into each and I think it would be a great addition. I think you being the maintainer is fine, as you have the most experience. Are there any changes to Mycodo itself that you think are appropriate for what you're proposing? There have been several things I've wanted to change for a while to make Mycodo more compatible with non-Pi environments (e.g. removing references to the user "pi", add alternatives to GPIO such as a breakout board, etc.). I'm happy to help any way I can. Thanks for taking this on.
Brilliant news, thank you for being so supportive!
Here is a list of the tasks I have in mind at present; thoughts welcome:
Cheers.
That sounds good. Just a few questions:
My interest levels are high, and to some small degree time is of the essence (my peppers are growing already!).. but life and work, as I'm sure you're all too aware, can get in the way of hobbies! Who knows ...
I can work in my own fork and make PRs to your upstream. That's the normal GitHub workflow I'm used to.
I will update the build scripts so other OSes work. I'll probably use some bash associative arrays: one mapping distros to their package-manager tool names and arguments, and one mapping Debian package names to (lists of) the corresponding names for the distro in question (Debian, Arch/Manjaro, MSYS2, conda).
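Roughly the kind of thing I have in mind (illustrative only; the distro keys and the python3-pip example are placeholders, not the real build-script contents):
# Illustrative sketch: map a distro id to its package-manager command, and a
# Debian package name to the corresponding name used by each distro.
declare -A PKG_TOOL=(
  [debian]="apt-get install -y"
  [arch]="pacman -S --noconfirm"
  [msys2]="pacman -S --noconfirm"
  [conda]="conda install -y"
)
declare -A PKG_MAP=(
  [debian:python3-pip]="python3-pip"
  [arch:python3-pip]="python-pip"
  [msys2:python3-pip]="mingw-w64-x86_64-python-pip"
  [conda:python3-pip]="pip"
)
distro=debian            # would normally be detected, e.g. from /etc/os-release
deb_pkg=python3-pip
${PKG_TOOL[$distro]} ${PKG_MAP[$distro:$deb_pkg]}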
Arm64 and Ubuntu/Debian support is easy, I think.
Influx is Open Source? Any reason not to use system packages or to build it from source?
Outside of this stuff, my first contribution of any substance will, I think, be around adding filtering-pipeline support to the camera capture code (if it's not there already?) and adding a filter (via ffmpeg's frei0r plugins) to remove LED light flicker. After that, my focus will be on deep learning and using computer vision to track plant health. The LED-flicker removal is a data-cleansing step before that (I also dislike that flicker when I see it on YouTube videos, etc.).
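For illustration, the post-capture step could look something like this (I'm using ffmpeg's built-in deflicker filter here as a stand-in; the exact frei0r plugin and parameters are still to be chosen, and the file names are placeholders):
# Reduce light flicker in a captured clip (illustrative only; file names are placeholders).
ffmpeg -i capture.mp4 -vf "deflicker=size=10:mode=pm" -c:a copy capture_deflickered.mp4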
The wiringpi fork is at https://github.com/WiringPi/WiringPi
Cheers.
I would also be interested in supporting a Gentoo port on amd64 and arm64, and I could likely help with the conda port as well.
Please note that the primary Gentoo RPi4 maintainer has recently stepped down (see the post from Oct 30, 2020: https://github.com/sakaki-/gentoo-on-rpi-64bit ). That said, I have used Gentoo as my primary OS since 2002 and have maintained it on everything from small laptops to Beowulf clusters. If I can port some of my projects to Gentoo on the RPi, I am willing to sign up as a Proxy Maintainer for the necessary toolchains. BTW, I have two interests in Mycodo: the first is supporting a series of growth-chamber and greenhouse experiments concerning the effects of salinity on germination (basically seedbank experiments, initially focusing on Taxodium distichum (bald cypress) -- the requirement here is to measure and accurately maintain salinity levels between 50 and 200 ppm), and a second set of experiments on the effects of groundwater depletion (see Mahoney & Rood 1998: https://link.springer.com/article/10.1007/BF03161678 ). The second interest is to automate and log some of the greenhouse operations for a small farm I recently purchased.
@ebo, I like your plans.
Thanks for the encouragement. This weekend I'll break away a little time to do a completely clean install (instead of installing on an already running machine) and see how it goes. I will try to let you know how it turns out. The biggest initial step with the Gentoo RPi repo is to make sure I have an appropriate cross-compiler toolchain set up for the RPi, so that I can use distcc to cut down on the compile time.
As a note for the non-Gentoo'ers in the audience, Gentoo was initially set up as a source-only distribution, but it has since come to support binary builds as well. Basically, the intent is to set up the compiler flags and system configuration so that programs compile to run as efficiently as possible. It is easy to build efficient binaries for one or two dedicated platforms (like the RPi3 and RPi4), but it is VERY different when you have to support any Tom, Dick, or Intel box. On the plus side, the package manager (Portage) allows very fine-grained maintenance: it can pin specific versions of packages and libraries, and build with provided patches without having to wait for upstream providers to get things patched.
I really like Gentoo for its ease of maintenance, but it is by far not the easiest to initially set up. That said, since this will be for the RPi3/4 only, it will be a LOT simpler and more straightforward for the RPi community, and Mycodo in particular.
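To give the non-Gentoo folks a concrete flavor of that workflow (entirely hypothetical, since no Mycodo ebuild exists yet; the app-misc category and the ~arm64 keyword are placeholders):
# Hypothetical usage once a Mycodo ebuild lands in an overlay (names are placeholders).
echo "app-misc/mycodo ~arm64" >> /etc/portage/package.accept_keywords/mycodo
emerge --ask app-misc/mycodo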
@ebo, that's kind of funny: I was a medium-term user of Gentoo years ago, in the Raspberry Pi 2 days. It was the slowness of recompiling everything on that SBC that finally broke me and caused me to move to Arch Linux instead (from where I went on to help out on MSYS2, even pushing for pacman as its package manager from the start!)
Well, binary packages on Gentoo sound fun, and the Pi 4 is pretty capable of compiling a few bits and pieces. I might be tempted to give it a try again, and this sounds like a good excuse to me.
Has there been any progress on this issue?
For a couple of years I've been running a modified version of Mycodo 5.6.9 on Ubuntu: a generic x86 virtual machine for testing and a generic armv7 board for production (my project does not use GPIO, I2C, etc., only Wi-Fi inputs/outputs driven through shell scripts).
The bulk of the changes that I had to make in the code to get Mycodo running on that setup have been implemented upstream by Kyle since then. I'm updating to the latest release, 8.12.6, and by now this generic Ubuntu support only requires minimal tweaks to the install scripts. Should I share some PRs for this?
Speaking for myself, I ran into a couple of snags back in Dec 2020 while developing a Portage ebuild to support everything except the GPIO pins. To start, I wanted to support everything that uses the USB connectors (so I could test the sensors on regular laptops in addition to, or leading up to, RPi support). Anyway, I got distracted by other things and this got tabled. I still have the sensors on my desk to poke at, and I will take a quick look at the changes. I have a couple of other projects that need to be finished before I can dedicate a major focus, but I will give it another poke or two.
Maybe @antoinechauveau and I can sync up off list to see if we can gather all available changes required to run with generic Ubuntu (and hopefully Gentoo) support.
I just started taking another look into this. I got quite a bit further (I think I missed some stuff 8+ months ago).
Anyway, I am curious whether people want to weigh in on a couple of particular build/install questions. In particular, looking at the Docker build, it looks like pretty much everything was built/installed into the mycodo user's home directory (/home/mycodo, /users/mycodo, or similar). Is this the preferred method? To be clear, I do not know of any other packages that are managed this way, though packages are sometimes installed under /opt or /usr/local instead. Does anyone have an opinion on where these should be installed on a stock OS? For the moment I will set things up to install in /opt/mycodo, but it is likely that it should be moved elsewhere.
I opened PR https://github.com/kizniche/Mycodo/pull/1099 with the simple changes to the setup script required for a successful install on non-dockerized Ubuntu 20.04 (plus fixes for various warnings etc., for good measure). Tested on the amd64 architecture, but expected to work on others too.
My production board at the moment is armv7 and doesn't support Docker, so I don't have much of an opinion on that second installation path...
Thanks for the PR. The docker install should already work on Debian-based systems, though I haven't attempted an install in a while.
Has this PR been merged yet, or should I check the fork?
Never mind, it looks like it was pulled in via commit c8d4aee4a88031001863fd3b836019eea16ab326.
EBo --
https://github.com/nargetdev/balena-mycodo
You should be able to run
docker-compose up
followed by..
make initialize-influx-2-db
And then you will need to go into the settings (gear icon) in the Mycodo web interface and change the Time Series Database options so it connects to InfluxDB in the Docker context, as follows..
Database: Influxdb 2.x
Hostname: mycodo_influxdb
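For what it's worth, a quick way to sanity-check that the database container is reachable before changing the Mycodo settings (this assumes the compose service is named mycodo_influxdb, matching the hostname above):
# List the running services, then ask the InfluxDB 2.x container to respond (service name assumed).
docker-compose ps
docker-compose exec mycodo_influxdb influx ping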
Mileage may vary ..
I am running on Ubuntu and OS X
Hi,
I love this project; it's so nicely put together, I must say. I had it up and running in Docker on my Raspberry Pi (running Manjaro) in no time at all.
I also like how clean it all is, so I'm somewhat hesitant about this feature request (one for which I am happy to put in the work; I just want to see how amenable Mycodo is to the idea!)
My day job involves packaging software (for the Anaconda Distribution) and I contribute package-recipe fixes to conda-forge. I also helped out with MSYS2 in the early days. I'd like to have a go at adding packages for this to both MSYS2 and conda-forge.
What do you think? Stupid idea? Should I just stick with my raspis (I have no shortage of them!)?