jospoortvliet opened this issue 8 years ago
Created https://github.com/owncloud/pi-image/issues/2 to share some thoughts on step 1.
TL;DR: most of the work can be split into a generic base image for booting and provisioning the system, plus a project-specific container for the custom feature bits of your project, with perhaps a custom package or two for the integration bits that make it all work smoothly.
I have been working with a Raspberry Pi 2 and a hard drive, using BerryBoot to create the Raspbian (Jessie) image and then set up ownCloud. This is essentially the same image.
I have a hard-wired connection running locally, the full GUI, and I am syncing 90 GB of files... it is working just fine. I followed the ownCloud installation instructions from our documentation, ensured the proper packages were installed and then added the tarballs. If there is a better way, I am happy to try that instead.
Have you considered using Docker?
Just my 2 cents.
More info: http://blog.hypriot.com/downloads/
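To make the Docker route concrete, here is a hypothetical Dockerfile sketch; the base image, ownCloud version, and package list are assumptions for illustration, not a tested recipe:

```dockerfile
# Hypothetical sketch: ownCloud on an ARM Raspbian-ish base image.
# Base image name, package set and version are illustrative assumptions.
FROM resin/rpi-raspbian:jessie

RUN apt-get update && apt-get install -y \
    apache2 php5 php5-gd php5-curl php5-sqlite \
    && rm -rf /var/lib/apt/lists/*

# Fetch and unpack the ownCloud server tarball into the web root
# (version number is just an example).
ADD https://download.owncloud.org/community/owncloud-8.2.2.tar.bz2 /tmp/
RUN tar -xjf /tmp/owncloud-8.2.2.tar.bz2 -C /var/www/ \
    && chown -R www-data:www-data /var/www/owncloud

EXPOSE 80
CMD ["apachectl", "-D", "FOREGROUND"]
```

The appeal here is that the container carries only the ownCloud stack, while the host image stays generic.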
If I could throw out another option:
Has anyone here looked at Buildroot?
The project allows you to build 'custom' images that include just what is needed, and it has direct support for Raspberry Pi devices.
Here's why I mention it: To have a device that's running a service like ownCloud, what are the biggest priorities? In my mind:
Either way, there's a need to create some sort of image; I would say that building one with Buildroot would achieve these goals much more directly than taking an image like Raspbian (with potentially hundreds of extra packages) and trimming it back to fit the need.
I've been able to build a custom RasPi image that boots in just over 4 seconds. Buildroot allows including only the software that is needed and can be easily adapted to other architectures (ARM, x86, x86_64...). For the 'power users', package managers can be included, but for the 'regular Joes', each update can be provided as a new Buildroot image, say, once every few months or whenever the ownCloud server software has a new release.
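The Buildroot flow described above looks roughly like this (a sketch of the usual commands; the defconfig name matches upstream Buildroot's Raspberry Pi 2 support, but the package selection is illustrative):

```
git clone https://git.buildroot.org/buildroot && cd buildroot
make raspberrypi2_defconfig   # Buildroot ships a defconfig for the RPi 2
make menuconfig               # enable e.g. a web server/PHP for the ownCloud stack
make                          # builds toolchain, kernel and rootfs
# results land in output/images/; the generated SD card image
# can then be written to the card with dd
```

Note that the first `make` is the long part: as mentioned below, it builds the cross toolchain from source.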
Thoughts?
My 2 cents follows:
Let's start with some rants :-)
Another suggestion:
I come from the openSUSE world, where we have a tool called kiwi (https://github.com/openSUSE/kiwi) that builds images which can be easily deployed to an SD card or hard drive and which, on first boot, enlarge their partitions to fill all available space. On top of that, these images are built by OBS, which also builds all openSUSE packages, so we would get a new image on every update of the distro; when people need to replace an SD card or hard drive, they can get an updated image rather than something we put together a year ago. Integrating it with git should also be pretty simple: GitHub has OBS hooks, so we can get packages and an image on every commit.
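For reference, a kiwi image is driven by a single XML description file. A rough, untested sketch follows; the element names follow kiwi's schema, but the image name, version and repository path are made up for illustration:

```xml
<!-- Hypothetical kiwi description; names, versions and repo path are examples. -->
<image schemaversion="6.1" name="ownCloud-Pi">
  <description type="system">
    <author>ownCloud</author>
    <contact>...</contact>
    <specification>ownCloud appliance for Raspberry Pi 2</specification>
  </description>
  <preferences>
    <!-- the "oem" type is what gives the first-boot partition resize -->
    <type image="oem" filesystem="ext4"/>
    <version>1.0.0</version>
  </preferences>
  <repository type="rpm-md">
    <source path="obs://openSUSE:Factory:ARM/standard"/>
  </repository>
  <packages type="image">
    <package name="owncloud"/>
  </packages>
</image>
```

OBS can then rebuild this image automatically whenever a package it pulls in changes.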
Well to reply to the follow up comments:
@jmaciasportela: It's not clear to me whether the Hypriot image is set up to partition the WD disks in a sane way. Also, the image is suspiciously large: a minimal Debian installation should weigh in at less than 200 MB or so (based on the fact that a simple multistrap with locales, bash, ssh and systemd produces a 55 MB tarball). That's still orders of magnitude more than e.g. PuppyLinux and the like (which run a full GUI stack in about the same amount of space), but it's OK. Finally, no source is a no-go, I'd say. Maybe I didn't look properly?
@tomswartz07: Buildroot: speaking from experience, it's fairly easy to use and doable within the timeframe, but it has the downside (as mentioned) that you're very much on your own when it comes to providing new/updated packages. There's no real distro to speak of, and the develop/manual-upgrade scenario involves setting up a complete cross-build toolchain and so on and so forth. It's not exactly difficult if you're somewhat familiar with building things yourself and cross compiling (assuming a good board support package, that is: a broken board support package or kernel config is no fun), but it imposes a steep learning curve on the power-user type who does not necessarily run Linux on the desktop/laptop but is otherwise quite willing and able to drop to a Linux command line over SSH or something.
Also, in terms of size, the ownCloud dependencies probably negate most of the real space savings, because ownCloud + your database of choice + dependencies is likely about as big as your carefully crafted Buildroot base image.
Another minor nitpick: if you do it 'properly' (i.e. build the whole toolchain yourself for maximally reproducible results) you quickly wind up with GBs of disk space devoted to copies of sources and up to three complete compiler toolchains (host -> host, to have a known compiler base to start from; host -> target, to be able to cross compile; target -> target, to have a 'native' compiler for the board). It takes hours to prepare even a small image on an i5 2500K (from memory: I once did a Buildroot with a kernel, a basic OS userland and nginx, php5 and FastCGI).
@miska: openSUSE: yeah, that's basically the same story, just s/Debian/openSUSE/, apart from the objection to a container. For me, the motivation for the container approach is two-fold:
But the cost is the extra layer of indirection, as you say. As for me, I tend to prefer Debian over openSUSE. My approach would be to use a tool like multistrap to put together the image, combined with a few details/commands cribbed from the freedom-maker project (which does much the same, using vmdebootstrap to set up a FreedomBox flavour of Debian and package it up in a nice disk image ready to boot out of the box). As for custom packages: IIRC, OBS does do debs, too. (Say that five times fast.)
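A multistrap setup of the kind described is driven by a small INI-style configuration file; here is a minimal sketch, where the directory name, suite and package list are illustrative:

```ini
; example.conf - hypothetical minimal multistrap configuration
[General]
arch=armhf
directory=owncloud-rootfs
cleanup=true
aptsources=Debian
bootstrap=Debian

[Debian]
packages=systemd openssh-server locales bash
source=http://httpredir.debian.org/debian
suite=jessie
```

Running `multistrap -f example.conf` then populates `owncloud-rootfs/` with an unconfigured Debian rootfs that can be packed into a disk image.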
@cmacq2 1) Well, regarding the IO: if the whole system should be on the HDD, then the SD card can be used as a bootloader only, mounting the rootfs directly from the hard drive. Then everything lives there, and I see no issue making sure all the IO goes there - no need to alter anything.
2) One thing I don't see: if everybody wants to do their own thing/project, how would all the different containers work together? I mean, Debian/openSUSE/Buildroot containers, each with its own instance of ownCloud?
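The SD-card-as-bootloader idea boils down to the kernel command line in the boot partition pointing the root filesystem at the hard drive. A hypothetical `/boot/cmdline.txt` (the device name `/dev/sda1` is an assumption; it must be on one line):

```
console=ttyAMA0,115200 root=/dev/sda1 rootfstype=ext4 rootwait
```

`rootwait` makes the kernel wait for the USB/SATA disk to show up before mounting it, which matters on the Pi since the disk enumerates later than the SD card.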
1) A possibility (using a setup with /boot, the kernel and the DTB on the SD card), to be sure. It means the SATA disks must come preformatted, and you also can't take advantage of the split setup to automatically recover functionality when a failed disk is replaced or the array is "upgraded". The nice thing about the SD card in this read-only scenario is that a user could, theoretically, simply plug disks into the SATA slot and the base image on the SD card could automatically provision them, making upgrade paths relatively easy (backup data -> replace -> wait a bit -> import data -> done).
2) You pick the project/feature set you want for a particular service and run the corresponding container. You shouldn't (in principle) run multiple containers that are intended to provide the exact same service. Of course you can run additional containers which provide orthogonal services, have them share data files using bind mounts (e.g. systemd-nspawn makes this very easy), and even set up networking among them beyond simply tunneling it to/from the host interface (eth0)...
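The bind-mount and virtual-ethernet setup mentioned for systemd-nspawn can be expressed declaratively in a per-machine `.nspawn` unit; a sketch, where the machine name and paths are hypothetical:

```ini
; /etc/systemd/nspawn/owncloud.nspawn - hypothetical example
[Files]
; share the data directory between host and container
Bind=/srv/owncloud-data:/var/www/owncloud/data

[Network]
; give the container its own veth pair instead of the host's eth0
VirtualEthernet=yes
```

With that in place, `machinectl start owncloud` boots the container with the disk's data directory mounted inside it.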
cc @ezraholm50 Please read this --^
@cmacq2 Re 2): I know how containers work, but the tricky part is that what we are mostly going to provide will be tightly integrated into ownCloud - it will be ownCloud apps, and I don't see any easy way to split apps across various containers.
Could anyone update me on the progress?
@ezraholm50, please read this repo's issues. Maybe @jospoortvliet could help you fill in the blanks? (Ezra is on the Tech and Me crew.)
Western Digital Labs will create 500 ownCloud-branded kits for the Raspberry Pi 2 to build self-hosted servers. The kit contains a 1 TB hard drive, some cables, a case and an SD card.
They will start to assemble these during February 2016. We have some flexibility here and could push that out a bit if we want.
Our job is to create an SD card image, and possibly also a hard drive image, which, when pre-installed in the kit above, helps people get their ownCloud up and running as easily as possible. Features we are looking for:
A rough todo would be something like this:
Each of these is relatively independent, and each can be done by multiple people using multiple approaches, at least initially. Of course, in the end, we need one solution.
If everybody who wants to work on one of these would reply here, linking to an issue they create, and track progress there, we can all sync up.