SOL-Space-Operating-Linux / meta-sol

Yocto layer for Space Operating Linux
MIT License

SOL (Space Operating Linux)

Table of Contents

Setting up Environment for Local Development

(This section may be skipped if you are using the SSRL build server.)

Preparing Containers, Directories, and Files

Note: If you plan on building on your own machine, be aware that Yocto builds from scratch can take up a LOT of CPU time (several hours) and drive space (hundreds of GB).

This process was tested on an x86_64 laptop running Ubuntu 22.04 but should work on any system that supports Docker. The actual build takes place inside a CROPS Poky container based on Ubuntu 18.04, which has all the necessary build dependencies pre-installed. Make sure to install Docker first and, optionally, modify user groups to allow non-root access.
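As a rough sketch, installing Docker and enabling non-root access on Ubuntu might look like the following (package name and group setup are standard Ubuntu/Docker conventions, not specific to this repository):

```shell
# Install Docker from the Ubuntu repositories
sudo apt-get update && sudo apt-get install -y docker.io

# Optional: allow the current user to run docker without sudo
# (takes effect after logging out and back in)
sudo usermod -aG docker "$USER"

# Verify the installation
docker run --rm hello-world
```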

A Note on Terminals

Two terminals are used to complete this process:

Create the Poky Work Directory
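This README later refers to the work directory as $SOLWORK (see the image output path under Flashing the TX2/TX2i). A minimal sketch, assuming you are free to choose the location (the path below is only an example; pick somewhere with a few hundred GB free):

```shell
# Top-level work directory; the path is an example, not a requirement
export SOLWORK="$HOME/solwork"
mkdir -p "$SOLWORK"
```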

Download the NVIDIA SDK Files

Run the CROPS Docker Container and Clone the Necessary Repositories
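One plausible way to start the container and fetch the layers, assuming the $SOLWORK directory from above. The clone destinations mirror the paths in bblayers.conf; the URLs are the layers' usual upstream homes, and you should pin each clone to a branch matching your Yocto release (meta-sol itself, this repository, is cloned alongside them):

```shell
# Start the CROPS Poky container with the work directory mounted at /workdir
docker run --rm -it -v "$SOLWORK":/workdir crops/poky:ubuntu-18.04 --workdir=/workdir

# Inside the container: clone Poky, then the meta layers into the
# locations listed in bblayers.conf
git clone https://git.yoctoproject.org/poky
git clone https://github.com/OE4T/meta-tegra poky/meta-tegra
git clone https://github.com/openembedded/meta-openembedded poky/meta-openembedded
```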

Modifying Build Configurations

Once Poky and all the required meta layers are cloned, you must source the bash environment provided with Poky. This will put useful tools (most importantly bitbake) in your path that will be used to build the TX2i image. This must be done every time you log out or start a new terminal.
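Sourcing the environment is done with the oe-init-build-env script that ships with Poky; passing a directory name both selects and, if needed, creates the build directory:

```shell
# From /workdir inside the container; creates and enters tx2i-build
source poky/oe-init-build-env tx2i-build
```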

This will put you in the tx2i-build folder, creating it if it does not already exist. This folder will eventually contain all downloaded files, build files, and images. Initially it contains only a conf folder with the bblayers.conf and local.conf configuration files.

bblayers.conf: Contains directory paths for all the required meta layers for a build

local.conf: Contains all user defined configurations for the build target

Reference https://www.yoctoproject.org/docs/3.0/ref-manual/ref-manual.html#ref-structure for more information on the directory structure of the Yocto project.

Note: You can find a template for these two files under meta-sol/conf/*.conf.template.

Your BBLAYERS variable in bblayers.conf should look like the following:

BBLAYERS ?= " \
  /workdir/poky/meta \
  /workdir/poky/meta-poky \
  /workdir/poky/meta-yocto-bsp \
  /workdir/poky/meta-tegra \
  /workdir/poky/meta-tegra/contrib \
  /workdir/poky/meta-sol \
  /workdir/poky/meta-openembedded/meta-oe \
  "

The next step is to tell BitBake what machine to target, where the NVIDIA SDK files are located, and what version of CUDA to use.
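These settings go in local.conf. A sketch of what the additions might look like, using meta-tegra's variable names; the SDK path and CUDA version below are placeholder values that must match your own download location and the release you are building:

```
MACHINE = "jetson-tx2i"

# Where the downloaded NVIDIA SDK packages live inside the container
# (example path -- point this at your actual download directory)
NVIDIA_DEVNET_MIRROR = "file:///workdir/downloads/nvidia/sdk"

# CUDA version to build against (example value)
CUDA_VERSION = "10.2"
```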

Note: In the code snippets, "+" at the beginning of a line means "add this line" (but without the + symbol) and "-" at the beginning of a line means remove this line.
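For example, a change switching the build target would be written as:

```diff
-MACHINE ?= "qemux86-64"
+MACHINE = "jetson-tx2i"
```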

Manually Update the tegra-eeprom Recipe

Due to upstream changes in tegra-eeprom-tool, we need to remove the current recipe and replace it with the upstream one, and then modify it to work with this version of Yocto.

Building the Image with BitBake

It's finally time to kick off the build. Keep in mind that this can take a very long time. Subsequent builds should be much quicker, depending on what has changed, as long as the tmp, cache, downloads, and sstate-cache directories have not been deleted.
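The build itself is a single bitbake invocation. core-image-sol and core-image-sol-dev are the image names referenced elsewhere in this README; pick the one that matches your needs:

```shell
# Run from the tx2i-build directory with the Poky environment sourced
bitbake core-image-sol
```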

Note: If you are attempting to build for a Jetson Nano, this README does not have all steps necessary to successfully build. Please reference the meta-tegra repository for more information on Jetson Nano.

Flashing the TX2/TX2i

All completed images are saved to the $SOLWORK/tx2i-build/tmp/deploy/images directory. meta-tegra includes an option to build an image that comes with a script to flash the TX2/TX2i. This was included in the image files with IMAGE_CLASSES += "image_types_tegra" and IMAGE_FSTYPES = "tegraflash". There will be a file named something similar to core-image-sol-jetson-tx2i.tegraflash.zip.

  1. Download the zip file to the host machine you will flash the TX2/TX2i from, and unzip it.

  2. Connect the TX2/TX2i to your host machine with a micro-usb cable.

Note: If your computer does not detect the TX2/TX2i at step 4 it could be because a cable without data lines was used.

  3. From a cold boot, hold down the recovery button and keep it held. Press the power button. Then, press the reset button (there should be a quick flash of the dev board lights). Finally, release the recovery button after 2 seconds.

  4. If the TX2/TX2i is successfully put into recovery mode, you should detect an NVIDIA device with the lsusb command.

  5. To flash the device, run the following command from within the unzipped directory:

    sudo ./doflash.sh

The TX2/TX2i should automatically reboot into the new image. Log in as the root user with no password.

To verify that CUDA is working, enter the following commands.

Note: cuda-samples is only included in the core-image-sol-dev image.

cd /usr/bin/cuda-samples
./deviceQuery
./UnifiedMemoryStreams

Useful Commands
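A few standard BitBake/OpenEmbedded commands that tend to be useful during development (these are generic tools, not specific to meta-sol):

```shell
bitbake-layers show-layers            # list the layers in the current build
bitbake -c cleansstate <recipe>       # force a recipe to rebuild from source
bitbake -c devshell <recipe>          # open a shell in a recipe's build environment
bitbake -e <recipe> | grep ^SRC_URI=  # inspect a variable as BitBake resolves it
```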

Custom Feature Selection

This section is intended for those who wish to use (or build upon) meta-sol for their own image.

It is recommended to start with an image bitbake file (examples are in recipes-core/images) and to require base images (such as core-image-sol and core-image-sol-redundant-live) as desired.

Along with the image file, a package group (examples in recipes-core/packagegroups) can be created to specify the specific features/recipes to include.

Finally, if necessary, a new machine configuration can be created (examples in conf/machine/) to enable or disable specific features, such as the initramfs image and the partition layout template. Recommended starting points are jetson-tx2-sol-redundant-live.conf and jetson-tx2i-sol-redundant-live.conf, which have a few different options commented out.
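Putting the pieces above together, a minimal custom image recipe might look like the following sketch (the file name and packagegroup name here are hypothetical; core-image-sol and the recipes-core paths come from this layer):

```
# my-image.bb -- hypothetical custom image built on top of core-image-sol
require recipes-core/images/core-image-sol.bb

# Pull in a custom package group (defined under recipes-core/packagegroups)
# that lists the features/recipes your payload needs
IMAGE_INSTALL += "packagegroup-my-payload"
```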

List of Useful References