This really applies to the stations, but its implementation would focus on the MS, so I'm opening the issue here.
Problem
@DesignChallengeGuy is looking to see if last-minute SD card copying/cloning can be avoided. A significant amount of effort is expended each year, the night before the competition, to get the latest code and configuration onto roughly 20 SD cards--approximately six instances of each of three station types, plus some additional ones such as this year's kiosk.
Option 1
@DesignChallengeGuy suggested remote updates either by "push" or "pull". For example, the MS keeps the most up-to-date images, patches, apps, data, etc. When a station boots, it contacts the MS, and the MS pushes any needed updates to the station.
Alternatively, the MS could be passive and the station performs the equivalent of an "apt-get update" before launching the correct application.
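For the "pull" variant, something like the following could run on each station at boot, before the station application starts. This is only a sketch: the unit name, the `station-app.service` it orders itself before, and the `/opt/station` checkout path are all assumptions, not anything that exists today.

```ini
# Hypothetical systemd unit (e.g. /etc/systemd/system/station-update.service)
# that pulls updates before the station application launches.
[Unit]
Description=Pull updates from the MS before starting the station app
Wants=network-online.target
After=network-online.target
# "station-app.service" is an assumed name for the station's application unit.
Before=station-app.service

[Service]
Type=oneshot
# Refresh packages, then update station code/config from a git checkout.
ExecStart=/usr/bin/apt-get update
ExecStart=/usr/bin/apt-get -y upgrade
ExecStart=/usr/bin/git -C /opt/station pull

[Install]
WantedBy=multi-user.target
```

The advantage of the passive-MS approach is that the MS only has to host a package mirror and/or a git remote; the stations do the rest themselves.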
Option 2
Option 1 made me think of my current setup at home.
Overview
I have a box that hosts the files necessary to boot several clients.
Each client is a PC that has no hard drive physically installed. When the client boots, it gets its boot image over the network, which it uses to boot up and also mount the root filesystem and all other filesystems.
tl;dr
I have one box running a BOOTP server and a TFTP server to serve PXE boot images, and an NFS server serving the root partition files of any diskless clients.
I believe my router is configured to point to the BOOTP/TFTP server when any diskless clients or PXE clients boot up. FYI it's also set up for static DHCP, so all IP address configuration is managed right in the router and not at each client.
When a client boots, the router gives it a fixed IP address based on its MAC address.
The BOOTP server uses this IP address or a portion of it to look up the boot image to provide, and the TFTP server provides the boot image, which is a bit over 100 MB usually.
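As a concrete illustration of the MAC-to-IP-to-boot-image lookup described above (not my actual home config), dnsmasq can do all three roles--DHCP/BOOTP, static address assignment, and TFTP--in a few lines:

```conf
# Sketch of an equivalent dnsmasq setup; MACs, hostnames, addresses,
# and paths are placeholders.
enable-tftp
tftp-root=/srv/tftp

# Static DHCP: fixed IP per MAC address, and a tag for diskless clients.
dhcp-host=00:11:22:33:44:55,client1,192.168.1.50,set:diskless

# Boot image handed out over TFTP to tagged clients.
dhcp-boot=tag:diskless,pxelinux.0
```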
Once the client loads the boot image into memory, it uses the image to boot up normally as if it booted off of a physical hard drive or CD-ROM.
As part of the boot-up, the client mounts its root partition over the network almost the same way it mounts other NFS mount points.
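For the NFS-root part, the server-side export plus the client's kernel command line are the two pieces involved. A minimal sketch, with example addresses and paths:

```conf
# Server side, /etc/exports: export the root filesystem to the subnet.
/srv/nfs/clientroot  192.168.1.0/24(rw,no_root_squash,no_subtree_check)

# Client side, kernel command line (e.g. cmdline.txt on a Raspberry Pi):
# root=/dev/nfs nfsroot=192.168.1.2:/srv/nfs/clientroot,vers=3 ip=dhcp rootwait
```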
Proposed Design
It looks like a Raspberry Pi can be booted over the network similarly to what is described above.
One SD card image for all the stations. It is purely used to boot over the network and nothing else. Since it has nothing for the design challenge, there should never be any need to update it.
There will be no need to archive anything off of this following the design challenge.
This image will be useless for sending out to the schools for testing.
One SD card image for the MS.
This will contain the MS image as usual.
This will be configured to run appropriate services for serving boot images and other files needed for the stations to boot.
Ideally, this would contain a single boot image for all the stations, i.e. there would be nothing unique per station requiring different images.
Ideally, this would contain a single station root filesystem to export over NFS to the stations.
Ideally, this would contain a single station /opt partition to export over NFS to the stations, so a single git pull would update the code and configuration that's used for all 20 stations.
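The payoff of the shared /opt idea is that one pull on the MS updates what every station sees. The sketch below demonstrates that flow with throwaway directories standing in for the upstream repo and the NFS-exported /opt; all paths and the branch layout are hypothetical.

```shell
#!/bin/sh
# Sketch: a single NFS-exported /opt means one "git pull" on the MS
# updates the code/config visible to all 20 stations at once.
set -e
WORK=$(mktemp -d)

# Stand-in for the upstream repo holding station code and configuration.
git init -q --bare "$WORK/upstream.git"

# A working copy used to publish changes upstream.
SRC=$(mktemp -d)
git -C "$SRC" init -q
BR=$(git -C "$SRC" symbolic-ref --short HEAD)
echo "station app v1" > "$SRC/app.txt"
git -C "$SRC" add app.txt
git -C "$SRC" -c user.email=ms@example.com -c user.name=ms commit -qm "v1"
git -C "$SRC" push -q "$WORK/upstream.git" "HEAD:$BR"

# Stand-in for /srv/opt on the MS: the directory exported over NFS
# as /opt to every station.
git clone -q "$WORK/upstream.git" "$WORK/srv-opt"

# A last-minute change is published upstream...
echo "station app v2" > "$SRC/app.txt"
git -C "$SRC" -c user.email=ms@example.com -c user.name=ms commit -qam "v2"
git -C "$SRC" push -q "$WORK/upstream.git" "HEAD:$BR"

# ...and a single pull on the MS rolls it out to all stations.
git -C "$WORK/srv-opt" pull -q origin "$BR"
cat "$WORK/srv-opt/app.txt"
```

No per-station cloning, no SD card shuffling--the stations simply see the updated files on their next read of /opt.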
http://blogs.wcode.org/2013/09/howto-netboot-a-raspberry-pi/
Option 3..N
Does anyone have any other ideas?