**owiil** opened this issue 1 month ago
Hello @owiil!

We use `readOnlyRootFilesystem` to increase the security of Jitsi Meet pods by default. I'll check for a good way to add it to the chart and set the file mode to `0755`.
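For context, the standard Kubernetes lever for this is the volume's `defaultMode`: a ConfigMap-mounted script can be given the exec bit at mount time, which also plays well with `readOnlyRootFilesystem`. A minimal pod-spec sketch, assuming a hypothetical ConfigMap name `jibri-finalize` and mount path:

```yaml
# Pod-spec fragment (sketch): mount a finalize script with the exec bit
# set, so it stays runnable even with readOnlyRootFilesystem enabled.
# "jibri-finalize" and the paths are hypothetical names.
containers:
  - name: jibri
    securityContext:
      readOnlyRootFilesystem: true
    volumeMounts:
      - name: finalize-script
        mountPath: /config/finalize.sh
        subPath: finalize.sh
volumes:
  - name: finalize-script
    configMap:
      name: jibri-finalize
      defaultMode: 0755 # sets the exec bit on the mounted file
```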
As a side-track: is it possible for you to publish the `finalize.sh` you're using to upload to Azure Blob Storage? I can include it in this repo as an example upload script (and maybe add an S3-friendly version as well).
Hello @spijet!

I'm sharing the solution I used for uploading videos to Azure Blob Storage after recordings in Jibri. Below is the `finalize.sh` script, along with the `CronJob` that I successfully implemented to handle the uploads. Additionally, I created a Persistent Volume (PV) and Persistent Volume Claim (PVC) to store the recordings locally before uploading.

**`finalize.sh` Script**

This is the script I'm using to upload the recordings to Azure Blob Storage:
```bash
#!/bin/bash

# Variables
RECORDING_DIR="/data/recordings"
AZURE_STORAGE_URL="https://yourstorageaccount.blob.core.windows.net/container"
SAS_TOKEN="<your-sas-token-here>"
DESTINATION_PATH="$AZURE_STORAGE_URL?$SAS_TOKEN"

# Install azcopy if not already installed
if ! command -v azcopy &> /dev/null; then
  echo "Installing azcopy..."
  wget -O azcopy.tar.gz https://aka.ms/downloadazcopy-linux
  tar -xvf azcopy.tar.gz --strip-components=1 -C /usr/local/bin
fi

# Check if the recordings directory exists
if [ -d "$RECORDING_DIR" ]; then
  echo "Recordings directory found. Preparing to upload."
else
  echo "Recordings directory not found. Exiting."
  exit 1
fi

# Upload each file in the recordings directory to Azure Blob Storage
for file in "$RECORDING_DIR"/*; do
  if [ -f "$file" ]; then
    echo "Uploading $file to Azure Blob Storage..."
    if azcopy copy "$file" "$DESTINATION_PATH"; then
      echo "File $file uploaded successfully."
    else
      echo "Failed to upload $file."
    fi
  fi
done

# Optionally: remove the recordings after upload
rm -rf "$RECORDING_DIR"/*
```
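As a side note: if you later want to ship this script into the Jibri pod itself via a ConfigMap (the approach discussed further down this thread), it can be embedded roughly like this. A sketch, where `jibri-finalize` is a hypothetical name:

```yaml
# Sketch: package the finalize script as a ConfigMap so it can be
# mounted into the Jibri pod. "jibri-finalize" is a hypothetical name.
apiVersion: v1
kind: ConfigMap
metadata:
  name: jibri-finalize
  namespace: jitsi-meet
data:
  finalize.sh: |
    #!/bin/bash
    # ... paste the script body from above here ...
```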
**`CronJob` for Automating the Upload Process**

I used the following `CronJob` to periodically check the recordings directory and upload any new recordings to Azure Blob Storage. It runs every 15 minutes.
```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: upload-videos
  namespace: jitsi-meet
spec:
  schedule: "*/15 * * * *" # Runs every 15 minutes
  successfulJobsHistoryLimit: 1
  failedJobsHistoryLimit: 1
  jobTemplate:
    spec:
      ttlSecondsAfterFinished: 0
      template:
        spec:
          restartPolicy: Never
          containers:
            - name: upload-to-azure
              image: mcr.microsoft.com/azure-cli
              command: ["/bin/sh", "-c"]
              args:
                - |
                  echo "=== Starting the upload process of video folders ==="

                  # Uncomment the following lines to add a static IP for the
                  # storage account if DNS resolution fails:
                  # echo "Adding entry to /etc/hosts for storage account IP resolution"
                  # echo "<resolved-storage-account-ip> yourstorageaccount.blob.core.windows.net" >> /etc/hosts
                  # Uncomment the following lines to verify the modification in /etc/hosts:
                  # echo "Contents of /etc/hosts after modification:"
                  # cat /etc/hosts

                  echo "Checking for new folders to upload..."
                  # Check for new folders to upload
                  for dir in /data/recordings/*; do
                    if [ -d "$dir" ]; then
                      echo "Folder found: $dir"
                      # Upload only if the content does not exist in Azure Blob Storage
                      echo "Checking if the content of folder $dir already exists in Azure Blob Storage..."
                      # For each file in the folder, check if it already exists
                      files_to_upload=false
                      for file in "$dir"/*; do
                        filename=$(basename "$file")
                        if [ -f "$file" ]; then
                          echo "Checking if the file $filename already exists in Blob Storage..."
                          if az storage blob exists --account-name yourstorageaccount --container-name container --name "$filename" --sas-token "<your-sas-token-here>" | grep -q '"exists": true'; then
                            echo "The file $filename already exists. Skipping."
                          else
                            echo "The file $filename does not exist. It will be uploaded."
                            files_to_upload=true
                          fi
                        fi
                      done
                      # Upload the entire folder if at least one file doesn't exist in Blob Storage
                      if [ "$files_to_upload" = true ]; then
                        echo "Uploading folder $dir to Azure Blob Storage..."
                        az storage blob upload-batch --source "$dir" --destination container --account-name yourstorageaccount --sas-token "<your-sas-token-here>"
                        echo "Upload of folder $dir completed."
                      else
                        echo "All files in folder $dir already exist. No upload needed."
                      fi
                    else
                      echo "No new folders found to upload."
                    fi
                  done
                  echo "=== End of the video folders upload process ==="
              volumeMounts:
                - name: jibri-storage
                  mountPath: /data/recordings
          volumes:
            - name: jibri-storage
              persistentVolumeClaim:
                claimName: temp-storage-pvc
```
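One note on the manifest above: the SAS token is embedded in plain text. A common refinement is to keep it in a Secret and inject it into the container as an environment variable. A minimal sketch, where `azure-sas-token` and the key name are hypothetical:

```yaml
# Sketch: keep the SAS token out of the CronJob manifest.
# "azure-sas-token" and the "sas-token" key are hypothetical names.
apiVersion: v1
kind: Secret
metadata:
  name: azure-sas-token
  namespace: jitsi-meet
stringData:
  sas-token: "<your-sas-token-here>"
```

The CronJob container could then pull it in via `env` with `valueFrom.secretKeyRef` and reference `$AZURE_SAS_TOKEN` in the script instead of the literal token.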
**Persistent Volume (PV) and Persistent Volume Claim (PVC)**

To store the recordings temporarily before uploading them to Azure Blob Storage, I created a Persistent Volume (PV) and a Persistent Volume Claim (PVC):

Persistent Volume:
```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: temp-storage-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /mnt/temp-storage
```
Persistent Volume Claim:
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: temp-storage-pvc
  namespace: jitsi-meet
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
```
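One caveat with this pair: on clusters that have a default StorageClass, a PVC that omits `storageClassName` may be dynamically provisioned instead of binding to `temp-storage-pv`. If that happens, pinning the claim explicitly should help. A sketch of the same PVC with two extra fields:

```yaml
# Sketch: force the claim to bind to the hostPath PV above
# instead of triggering dynamic provisioning.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: temp-storage-pvc
  namespace: jitsi-meet
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ""          # opt out of the default StorageClass
  volumeName: temp-storage-pv   # bind explicitly to the PV
  resources:
    requests:
      storage: 10Gi
```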
**Helm Chart `values.yaml` Configuration**

Finally, I edited the `values.yaml` file for the Jibri configuration to enable persistence using the created PVC:
```yaml
jibri:
  persistence:
    enabled: true
    existingClaim: temp-storage-pvc
```
This solution works well in my environment, handling uploads to Azure Blob Storage and managing local storage with Kubernetes persistent volumes. However, I would like to run `finalize.sh` directly in the Jibri pod. Let me know if you have any questions or need more details!
I'll come up with a solution in the coming days and will ping you here when it's ready for testing. Thank you!
Hello again, @owiil!

I've just pushed 8f9235c with the addition of `.Values.jibri.custom.other._finalize_sh`. Can you please test it and report back? If all is well, I'll tag a new release right away.
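Judging only by the value path, usage in `values.yaml` would look roughly like the sketch below; the exact shape (a multi-line string holding the script body) is my assumption, so treat the commit itself as authoritative:

```yaml
jibri:
  custom:
    other:
      # Assumed shape: a multi-line string that the chart renders
      # into the finalize script.
      _finalize_sh: |
        #!/bin/bash
        echo "Recording session finished, uploading..."
```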
Hi,

I am encountering an issue when trying to use a `finalize.sh` script inside a Jibri pod in Kubernetes. The script is provided via a ConfigMap and mounted inside the pod, and the script path is set using the environment variable `JIBRI_FINALIZE_RECORDING_SCRIPT_PATH` with the value `/config/finalize.sh`. However, since ConfigMap volumes are mounted as read-only, I am unable to apply execution permissions to the script, causing it to fail during the Jibri recording workflow.

**Problem Details**

- `finalize.sh` is mounted at `/config/finalize.sh`.
- The `JIBRI_FINALIZE_RECORDING_SCRIPT_PATH` environment variable is correctly set to point to the script path.
- When running `chmod +x /config/finalize.sh`, the following error occurs:

  ```
  chmod: changing permissions of '/config/finalize.sh': Read-only file system
  ```
**Workarounds Attempted**

1. Copying to a writable directory:

   ```shell
   cp /config/finalize.sh /tmp/finalize.sh && chmod +x /tmp/finalize.sh
   ```

   Result: The script was copied successfully, but it still failed to execute, even after pointing the `JIBRI_FINALIZE_RECORDING_SCRIPT_PATH` environment variable to the new path (`/tmp/finalize.sh`).

2. Using an init container (see the sketch after this list):

   Result: The `chmod` command failed again due to the ConfigMap being mounted as read-only.
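For what it's worth, the init-container pattern usually works when the copy target is a shared `emptyDir` rather than the ConfigMap mount itself, since `chmod` then runs on a writable filesystem. A minimal sketch, with hypothetical names:

```yaml
# Pod-spec fragment (sketch): copy the script out of the read-only
# ConfigMap mount into a writable emptyDir, then point Jibri at it.
initContainers:
  - name: fix-finalize-perms
    image: busybox
    command: ["sh", "-c", "cp /config-src/finalize.sh /scripts/ && chmod 0755 /scripts/finalize.sh"]
    volumeMounts:
      - name: finalize-src
        mountPath: /config-src
      - name: scripts
        mountPath: /scripts
containers:
  - name: jibri
    env:
      - name: JIBRI_FINALIZE_RECORDING_SCRIPT_PATH
        value: /scripts/finalize.sh
    volumeMounts:
      - name: scripts
        mountPath: /scripts
volumes:
  - name: finalize-src
    configMap:
      name: jibri-finalize # hypothetical ConfigMap name
  - name: scripts
    emptyDir: {}
```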
**Expected Behavior**

I would appreciate an official solution or guidance on how to run scripts like `finalize.sh` that are provided via ConfigMap in Kubernetes environments. Since the `JIBRI_FINALIZE_RECORDING_SCRIPT_PATH` environment variable correctly points to the script path, it would be ideal to ensure that the script can be executed without needing workarounds that do not function properly.

**System**

**Additional Context**

The purpose of my `finalize.sh` script is to upload recorded video files to Azure Blob Storage. ConfigMap volumes are mounted read-only in Kubernetes, which prevents changing permissions on, or executing, the mounted script directly. An official solution or recommendation for handling this scenario would be very helpful, especially for scripts that are crucial to the proper functioning of Jibri, such as `finalize.sh`.