The scripts below are currently based on Isaac Sim 4.1.0 and Isaac Lab 1.1.0. They should work on other versions of Isaac Sim and Isaac Lab, but you may need to modify them accordingly.
Docker Image | Isaac Sim | Ubuntu |
---|---|---|
j3soon/omni-farm-isaac-sim:4.2.0 | 4.2.0 | 22.04.3 LTS |
j3soon/omni-farm-isaac-sim:4.1.0 | 4.1.0 | 22.04.3 LTS |
j3soon/omni-farm-isaac-sim:4.0.0 | 4.0.0 | 22.04.3 LTS |
j3soon/omni-farm-isaac-sim:2023.1.1 | 2023.1.1 | 22.04.3 LTS |
Docker Image | Isaac Lab | Isaac Sim | Ubuntu |
---|---|---|---|
j3soon/omni-farm-isaac-lab:1.1.0 | 1.1.0 | 4.1.0 | 22.04.3 LTS |
Skip this section if you already have Omniverse Farm installed.
Before proceeding with the installation, make sure you have modified the `max_capacity` value in the `values.yaml` file inside the `omniverse-farm-x.x.x.tgz` file. Locate:

```yaml
capacity:
  # -- Specify the max number of jobs the controller is allowed to run.
  max_capacity: 32
```

and modify it to something like:

```yaml
capacity:
  # -- Specify the max number of jobs the controller is allowed to run.
  max_capacity: 1024
```
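If you prefer doing this from the command line, the edit can be scripted. A minimal sketch follows; the `omniverse-farm` directory name inside the archive is an assumption, and a stand-in `values.yaml` is created here so the commands run as-is (with the real chart, you would first extract `omniverse-farm-x.x.x.tgz`):

```shell
# Stand-in chart tree so this sketch is self-contained; with the real chart you
# would instead run: tar -xzf omniverse-farm-x.x.x.tgz
mkdir -p omniverse-farm
printf 'capacity:\n  # -- Specify the max number of jobs the controller is allowed to run.\n  max_capacity: 32\n' > omniverse-farm/values.yaml

# Raise the limit in place, then repackage the chart.
sed -i 's/max_capacity: 32/max_capacity: 1024/' omniverse-farm/values.yaml
tar -czf omniverse-farm-x.x.x.tgz omniverse-farm
```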
(Note that the pre-installation steps have not been tested on a real machine yet...)
Follow the official installation guide to install Omniverse Farm.
After installation, you should have an installed Farm Queue and one or more Farm Agent workers, which can be connected to the queue in subsequent steps. All Farm Agents should have access, through Nucleus, to the USD scenes used in the submitted jobs.
Follow this example to test your Omniverse Farm installation. First, submit a rendering job through Movie Capture. Next, connect a Farm Agent to the Farm Queue, and make sure the job finishes successfully by checking the output files. Please skip the Blender decimation example in the documentation, as it is not relevant to this repository.
This repo is tested on Omniverse Farm 105.1.0 with Kubernetes set up. The scripts are tested in an environment consisting of multiple OVX server nodes with L40 GPUs, a CPU-only head node, and a large NVMe storage server. These servers are interconnected via a high-speed network using BlueField-3 DPUs and ConnectX-7 NICs. See this post and this post for more information. However, the scripts in this repository should work on any Omniverse Farm setup, even on a single machine.
If you forgot to perform the pre-installation steps, you can still modify `max_capacity` after installation:

```sh
kubectl edit cm/controller-capacity -n ov-farm
```
Change

```yaml
apiVersion: v1
data:
  capacity.json: |2-
    {
        "max_capacity": 32
    }
kind: ConfigMap
```
to

```yaml
apiVersion: v1
data:
  capacity.json: |2-
    {
        "max_capacity": 512
    }
kind: ConfigMap
```
Save and close the file. Then run:

```sh
kubectl delete pods/controller-0 -n ov-farm
```

and wait for the controller pod to automatically restart.
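A non-interactive alternative to `kubectl edit` is `kubectl patch`. Since the `capacity.json` value is JSON embedded in YAML, building the patch with `jq` keeps the escaping manageable. This is a sketch: verify the ConfigMap key on your cluster first, which is why the `kubectl` lines are left commented out:

```shell
# Build a merge patch whose capacity.json value is itself a JSON string.
patch=$(jq -n '{data: {"capacity.json": ({max_capacity: 512} | tojson)}}')
echo "$patch"
# kubectl patch cm/controller-capacity -n ov-farm --type merge -p "$patch"
# kubectl delete pods/controller-0 -n ov-farm   # restart the controller
```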
Clone this repository:

```sh
git clone https://github.com/j3soon/omni-farm-isaac.git
cd omni-farm-isaac
```
Install `jq` for JSON parsing. For example, if you are using Ubuntu:

```sh
sudo apt-get update
sudo apt-get install -y jq
```
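The scripts rely on `jq` to extract fields from the Farm API's JSON responses. A toy example of that kind of filtering, where the response shape below is made up purely for illustration:

```shell
# Made-up response shape, for illustrating jq field extraction only.
response='{"tasks": [{"id": 1, "status": "finished"}, {"id": 2, "status": "running"}]}'
echo "$response" | jq -r '.tasks[] | select(.status == "running") | .id'
# → 2
```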
Fill in the Omniverse Farm server information in `secrets/env.sh`, for example:

```sh
export FARM_API_KEY="s3cr3t"
export FARM_URL="http://localhost:8222"
export FARM_USER="j3soon"
export NUCLEUS_HOSTNAME="localhost"
```
Then, for each shell session, make sure to source the environment variables by running the following command in the root directory of this repository:

```sh
source secrets/env.sh
```

In some examples below, we will upload files to Nucleus through `omnicli`; alternatively, you can use the GUI to upload files to Nucleus. All following commands assume you are in the root directory of this repository (`omni-farm-isaac`) and have sourced the environment variables file (`secrets/env.sh`).
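A quick sanity check that the variables from `secrets/env.sh` are actually set in the current shell (variable names taken from the example above):

```shell
# Prints the name of any expected variable that is unset or empty.
for var in FARM_API_KEY FARM_URL FARM_USER NUCLEUS_HOSTNAME; do
  [ -n "${!var}" ] || echo "Missing: $var"
done
```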
Skip this section if accessing your Omniverse Farm doesn't require a VPN.
There doesn't seem to be a way to use the OpenVPN Connect v3 GUI on Linux as on Windows or macOS. Instead, install the OpenVPN 3 Client from the command line by following the official guide.
Then, copy your `.ovpn` client config file to `secrets/client.ovpn`, install the config, and connect to the VPN with:

```sh
scripts/vpn/install_config.sh client.ovpn
scripts/vpn/connect.sh
```
To disconnect from the VPN and uninstall the VPN config, run:

```sh
scripts/vpn/disconnect.sh
scripts/vpn/uninstall_config.sh
```
These four scripts are just wrappers around the `openvpn3` command line tool. See the official documentation for more details. If a previous config is already installed, you must uninstall it before installing a new one. Otherwise, the scripts will create two VPN profiles with the same name, which can only be fixed by using the `openvpn3` command line tool directly. Specifically, use the following commands:

```sh
openvpn3 sessions-list
openvpn3 session-manage -D --session-path "/net/openvpn/v3/sessions/<SESSION_ID>"
openvpn3 configs-list --verbose
openvpn3 config-remove --path "/net/openvpn/v3/configuration/<CONFIG_ID>"
```
Save the job definition file and verify it:

```sh
scripts/save_job.sh echo-example
scripts/load_job.sh
```

Then, submit the job:

```sh
scripts/submit_task.sh echo-example "hello world" "Echo hello world"
```

You can remove the job definition file after the job has finished:

```sh
scripts/remove_job.sh echo-example
```
This demo allows running arbitrary shell commands on Omniverse Farm.
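Under the hood, submitting a task is an HTTP call to the Farm Queue. The sketch below shows the general shape of such a call; the endpoint path and body field names are assumptions modeled on the Farm Queue management API, so cross-check them against `scripts/submit_task.sh` before relying on them (the `curl` call is left commented out):

```shell
# Assumed endpoint and field names -- verify against scripts/submit_task.sh.
payload=$(jq -n \
  --arg user "$FARM_USER" \
  --arg args "hello world" \
  --arg comment "Echo hello world" \
  '{user: $user, task_type: "echo-example", task_args: {args: $args}, task_comment: $comment}')
echo "$payload"
# curl -s -X POST "$FARM_URL/queue/management/tasks/submit" \
#   -H "X-API-KEY: $FARM_API_KEY" -H "Content-Type: application/json" \
#   -d "$payload"
```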
Save the job definition file and verify it:

```sh
scripts/save_job.sh isaac-sim-dummy-example
scripts/load_job.sh
```

Then, submit the job:

```sh
scripts/submit_task.sh isaac-sim-dummy-example "./standalone_examples/api/omni.isaac.core/time_stepping.py" "Isaac Sim Time Stepping"
# or
scripts/submit_task.sh isaac-sim-dummy-example "./standalone_examples/api/omni.isaac.core/simulation_callbacks.py" "Isaac Sim Simulation Callbacks"
```

You can remove the job definition file after the job has finished:

```sh
scripts/remove_job.sh isaac-sim-dummy-example
```

This demo allows running arbitrary built-in Isaac Sim scripts on Omniverse Farm.
This script assumes that the Nucleus server has username `admin` and password `admin`. The commands below will fail if the Nucleus server has a different username and password; in that case, refer to the next section on how to set up Nucleus credentials. Use `omnicli` to upload the script to Nucleus:

```sh
cd thirdparty/omnicli
./omnicli copy "../../tasks/isaac-sim-simulation-example.py" "omniverse://$NUCLEUS_HOSTNAME/Projects/J3soon/Isaac/4.1/Scripts/isaac-sim-simulation-example.py"
cd ../..
```
Save the job definition file and verify it:

```sh
scripts/save_job.sh isaac-sim-basic-example
scripts/load_job.sh
```

Then, submit the job:

```sh
scripts/submit_task.sh isaac-sim-basic-example \
  "/run.sh \
  --download-src 'omniverse://$NUCLEUS_HOSTNAME/Projects/J3soon/Isaac/4.1/Scripts/isaac-sim-simulation-example.py' \
  --download-dest '/src/isaac-sim-simulation-example.py' \
  --upload-src '/results/isaac-sim-simulation-example.txt' \
  --upload-dest 'omniverse://$NUCLEUS_HOSTNAME/Projects/J3soon/Isaac/4.1/Results/isaac-sim-simulation-example.txt' \
  './python.sh -u /src/isaac-sim-simulation-example.py 10'" \
  "Isaac Sim Cube Fall"
```

You can remove the job definition file after the job has finished:

```sh
scripts/remove_job.sh isaac-sim-basic-example
```
This demo allows running arbitrary Isaac Sim scripts on Omniverse Farm by downloading the necessary files, executing the specified command, and then uploading the output files to Nucleus.
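The flags above follow a download, execute, upload pattern. As a rough illustration of that pattern only (this is not the actual `/run.sh`, which ships inside the Docker image; the `omnicli` transfer steps are commented out):

```shell
run_sketch() {  # simplified stand-in for /run.sh, for illustration only
  local download_src download_dest upload_src upload_dest cmd
  local -a commands=()
  while [ $# -gt 0 ]; do
    case "$1" in
      --download-src)  download_src="$2";  shift 2 ;;
      --download-dest) download_dest="$2"; shift 2 ;;
      --upload-src)    upload_src="$2";    shift 2 ;;
      --upload-dest)   upload_dest="$2";   shift 2 ;;
      *) commands+=("$1"); shift ;;  # remaining arguments are shell commands
    esac
  done
  # ./omnicli copy "$download_src" "$download_dest"   # fetch input from Nucleus
  for cmd in "${commands[@]}"; do bash -c "$cmd"; done
  # ./omnicli copy "$upload_src" "$upload_dest"       # push results to Nucleus
}
run_sketch --download-src in.py --download-dest /tmp/in.py 'echo step 1' 'echo step 2'
```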
If your Nucleus server has a non-default username and password, use `./omnicli auth [username] [password]` to enter your credentials for uploading files. Alternatively, you can use Omniverse Launcher to perform authentication through a GUI. In addition, use the `isaac-sim-nucleus-example.json` job description instead to include your username and password. The job description assumes `nucleus-secret` has been added to the K8s secrets by the admin, including `OMNI_USER` and `OMNI_PASS`. Alternatively, if security is not a concern, you may include the username and password directly through the `env` entry in the job descriptions.
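For reference, the `nucleus-secret` the admin would create could be sketched as a manifest like the following; the key names match `OMNI_USER`/`OMNI_PASS` above, while the namespace and placeholder credentials are assumptions:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: nucleus-secret
  namespace: ov-farm
type: Opaque
stringData:
  OMNI_USER: admin  # placeholder
  OMNI_PASS: admin  # placeholder
```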
Use `omnicli` to upload the script to Nucleus:

```sh
cd thirdparty/omnicli
./omnicli copy "../../tasks/isaac-sim-simulation-example.py" "omniverse://$NUCLEUS_HOSTNAME/Projects/J3soon/Isaac/4.1/Scripts/isaac-sim-simulation-example.py"
cd ../..
```
Save the job definition file and verify it:

```sh
scripts/save_job.sh isaac-sim-nucleus-example
scripts/load_job.sh
```

Then, submit the job:

```sh
scripts/submit_task.sh isaac-sim-nucleus-example \
  "/run.sh \
  --download-src 'omniverse://$NUCLEUS_HOSTNAME/Projects/J3soon/Isaac/4.1/Scripts/isaac-sim-simulation-example.py' \
  --download-dest '/src/isaac-sim-simulation-example.py' \
  --upload-src '/results/isaac-sim-simulation-example.txt' \
  --upload-dest 'omniverse://$NUCLEUS_HOSTNAME/Projects/J3soon/Isaac/4.1/Results/isaac-sim-simulation-example.txt' \
  './python.sh -u /src/isaac-sim-simulation-example.py 10'" \
  "Isaac Sim Cube Fall"
```

You can remove the job definition file after the job has finished:

```sh
scripts/remove_job.sh isaac-sim-nucleus-example
```
The aforementioned methods only upload the results after the specified command runs successfully, potentially resulting in loss of results if the command fails. To prevent this, you can mount a persistent volume to the container. The `isaac-sim-volume-example.json` job description assumes that an `nfs-pv` persistent volume (PV) connecting to a storage server through NFS has been added to K8s, along with a corresponding `nfs-pvc` persistent volume claim (PVC), by the admin. This method allows you to keep partial results even if the command fails. This NFS setup is preferable for multiple nodes over using `volumeMounts.mountPath`. The latter mounts the volume on the node where the pod is running, which can become challenging to manage in clusters with multiple nodes.
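For reference, a minimal sketch of such an `nfs-pv`/`nfs-pvc` pair; the server, path, and storage sizes are placeholders the admin would replace:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
spec:
  capacity:
    storage: 1Ti
  accessModes:
    - ReadWriteMany
  nfs:
    server: storage.example.com  # placeholder
    path: /export/results        # placeholder
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc
  namespace: ov-farm
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""
  resources:
    requests:
      storage: 1Ti
```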
Use `omnicli` to upload the script to Nucleus:

```sh
cd thirdparty/omnicli
./omnicli copy "../../tasks/isaac-sim-simulation-example.py" "omniverse://$NUCLEUS_HOSTNAME/Projects/J3soon/Isaac/4.1/Scripts/isaac-sim-simulation-example.py"
cd ../..
```
Save the job definition file and verify it:

```sh
scripts/save_job.sh isaac-sim-volume-example
scripts/load_job.sh
```

Then, submit the job:

```sh
scripts/submit_task.sh isaac-sim-volume-example \
  "/run.sh \
  --download-src 'omniverse://$NUCLEUS_HOSTNAME/Projects/J3soon/Isaac/4.1/Scripts/isaac-sim-simulation-example.py' \
  --download-dest '/src/isaac-sim-simulation-example.py' \
  'ls /mnt/nfs' \
  'mkdir -p /mnt/nfs/results' \
  './python.sh -u /src/isaac-sim-simulation-example.py 10' \
  'cp /results/isaac-sim-simulation-example.txt /mnt/nfs/results/isaac-sim-simulation-example.txt'" \
  "Isaac Sim Cube Fall"
```

You can remove the job definition file after the job has finished:

```sh
scripts/remove_job.sh isaac-sim-volume-example
```
Note that you can remove the `--download-src` and `--download-dest` options if the script is stored in the persistent volume. In addition, the `cp` command here is only for demonstration purposes; the best practice is to write the results directly to the persistent volume. This can be achieved by making the script accept an additional argument for the output directory.
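For example, a task script taking a hypothetical output-directory argument could write results straight to the mounted volume. This is a sketch; the example script in this repository would need to be modified to accept such an argument:

```shell
# Hypothetical interface: the argument selects where results are written.
write_results() {
  local output_dir="${1:-/results}"   # default keeps the original behavior
  mkdir -p "$output_dir"
  echo "simulation finished" > "$output_dir/isaac-sim-simulation-example.txt"
}
# Inside the container this would be: write_results /mnt/nfs/results
write_results "$(mktemp -d)/results"
```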
Make sure to follow the Running Isaac Sim Tasks section before moving on to this section.
The demo tasks here assume the aforementioned `nucleus-secret` and `nfs-pvc` setup. You can modify the job definition files to include your own credentials and persistent volume claim. In this section, we only use the j3soon/omni-farm-isaaclab docker image for simplicity. You can build your own docker image with the necessary dependencies and scripts for your tasks. This will require you to write a custom job definition and optionally copy `omnicli` when building your docker image.
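A starting point for such a custom image might look like the following Dockerfile sketch; the base image tag and copied paths are assumptions to adapt to your task:

```dockerfile
# Assumed base image; pick the tag matching your Isaac Lab version.
FROM j3soon/omni-farm-isaac-lab:1.1.0
# Optionally bundle omnicli so tasks can transfer files with Nucleus.
COPY thirdparty/omnicli /opt/omnicli
# Add your own task scripts and dependencies here.
COPY tasks/ /opt/tasks/
```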
Save the job definition file and verify it:

```sh
scripts/save_job.sh isaac-lab-volume-example
scripts/load_job.sh
```

Then, submit the job:

```sh
scripts/submit_task.sh isaac-lab-volume-example \
  "/run.sh \
  --upload-src '/root/IsaacLab/logs' \
  --upload-dest 'omniverse://$NUCLEUS_HOSTNAME/Projects/J3soon/Isaac/4.1/Results/IsaacLab/logs' \
  'ls /mnt/nfs' \
  'mkdir -p /mnt/nfs/results/IsaacLab/logs' \
  'ln -s /mnt/nfs/results/IsaacLab/logs logs' \
  '. /opt/conda/etc/profile.d/conda.sh' \
  'conda activate isaaclab' \
  './isaaclab.sh -p source/standalone/workflows/rl_games/train.py --task Isaac-Ant-v0 --headless'" \
  "Isaac Lab RL-Games Isaac-Ant-v0"
```

You can remove the job definition file after the job has finished:

```sh
scripts/remove_job.sh isaac-lab-volume-example
```
This demo allows running arbitrary built-in Isaac Lab scripts on Omniverse Farm.
For headless tasks, simply follow the official guide.
If your task requires a GUI during development, see this guide.
Refer to `scripts/docker` for potentially useful scripts for running Isaac Sim tasks locally.
- The `scripts/save_job.sh` script only allows the use of a single argument `args`. You need to modify the job definition file and script to include more arguments if necessary.
- Saving a job definition (`scripts/save_job.sh`) and submitting a task that refers to that job definition (`scripts/submit_task.sh`) doesn't seem to always be in sync. Please submit some dummy tasks to verify that the job definition changes are reflected in new tasks before submitting the actual task.
- The time limit (`active_deadline_seconds`) for K8s pods is set to `86400` (1 day) by Omniverse Farm. If the task takes longer than 1 day, the task will be terminated. After the K8s pod has been terminated, the K8s job will restart it once (`backoffLimit: 1`) even though `is_retryable` is set to False. This restarted K8s pod cannot be cancelled through the Omniverse UI. You can modify the time limit by changing the `active_deadline_seconds` field in the job definition file; we set it to 10 days in all job definitions, which is enough for most tasks.
- Retries (`backoffLimit: 1`) after K8s pod termination appear to happen when the command exits with a non-zero status code. This issue can be observed by running the following:

  ```sh
  kubectl get jobs -n ov-farm -o yaml | grep backoffLimit
  ```

- The `cm/controller-job-template-spec-overrides` ConfigMap doesn't seem to allow changing the `backoffLimit` field.
- The number of GPUs used by each task can be specified through the `nvidia.com/gpu` field in the job definition file.
- `job_spec_path` is required for options such as `args` and `env` to be saved. If the `job_spec_path` is `null`, these options will be forced empty. In our examples, we simply set it to a dummy value (`"null"`). See this thread for more details.
- … the working directory (`/isaac-sim`). This behavior may result in errors such as:

  ```
  /isaac-sim/kit/python/bin/python3: can't open file '/isaac-sim/ ': [Errno 2] No such file or directory
  ```
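The trailing space inside `'/isaac-sim/ '` hints that an empty argument was produced and then resolved against the working directory. Splitting a command string on every single space, as a naive dispatcher might, reproduces the empty argument (illustration only):

```shell
# FS='[ ]' disables awk's usual whitespace collapsing, so the doubled space
# yields an empty field -- the empty argument seen in the error above.
echo './python.sh  -u script.py' | \
  awk -v FS='[ ]' '{ for (i = 1; i <= NF; i++) printf "[%s]", $i; print "" }'
# → [./python.sh][][-u][script.py]
```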
- … `submitted` state.
- … `running` state.
- When extracting `.tar` archives on a mounted volume, make sure to use the `--no-same-owner` flag to prevent the following error:

  ```
  tar: XXX: Cannot change ownership to uid XXX, gid XXX: Operation not permitted
  ```
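A self-contained demonstration of the flag (creates a throwaway archive; `--no-same-owner` makes `tar` keep the extracting user's ownership instead of trying to restore the archived uid/gid):

```shell
demo=$(mktemp -d)                       # throwaway working directory
mkdir -p "$demo/src" && echo hi > "$demo/src/file.txt"
tar -cf "$demo/results.tar" -C "$demo" src
mkdir -p "$demo/out"
tar -xf "$demo/results.tar" --no-same-owner -C "$demo/out"
cat "$demo/out/src/file.txt"
# → hi
```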
- … `running` state.
- When using `omnicli`, sometimes the following error occurs:

  ```
  Error: no DISPLAY environment variable specified
  ```

  This can be fixed by running the `omnicli` command in a terminal with desktop support and entering the username and password through the browser.
If you encounter:

```
Error: Connection
```

this may be due to incorrect Nucleus credentials or an incorrect Nucleus server URL. Try launching a `sleep infinity` task and exec into the pod to debug the issue:

```sh
kubectl exec -it -n ov-farm <POD_ID> -- /bin/bash
# in the container
env | grep OMNI
# check that `OMNI_USER` and `OMNI_PASS` are set correctly
apt-get update && apt-get install -y iputils-ping
ping <NUCLEUS_HOSTNAME>
# check that the Nucleus server is reachable
```
The Omniverse Farm webpage logs show the following error when using a custom-built docker image:

```
#### Agent ID: controller-0.controller.ov-farm.svc.cluster.local-1
/bin/bash: - : invalid option
Process exited with return code: -1
```

This may be due to building the image on Windows; try building in a Linux environment instead.