Closed 0xLucqs closed 1 year ago
Hey guys, what are the current bottlenecks? Purely CPU cores?
I can suggest one easy path (requires £ and access to the GH beta managed runners, like for sayajin-labs) -> GH managed runners
One hard path -> custom self-hosted runners on AWS; also very expensive and cumbersome to maintain
Thanks, we'll do that. You deliver as always 💯
Our application for GitHub managed runners is underway. We are on the waiting list; hopefully we'll get an answer soon.
Tried to set up a self-hosted runner.
The AWS machine is running and correctly configured.
However, when trying to run the workflow with this self-hosted runner, the CI fails.
First, I got issues with environment variables, specifically `PATH` being overridden (it seems to me by the `actions/checkout@v3` step). The result was that the `rustup` command was not found and the `rustup show` step was failing.
As a workaround, I tried adding `source $HOME/.profile` as a first step before `rustup show`.
When doing so, this step works, but the job then fails on `cargo-llvm-cov`.
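As an aside, a possibly less invasive workaround than sourcing `$HOME/.profile` in every step would be to extend `PATH` once via the `GITHUB_PATH` file that Actions exposes to each step. A minimal self-contained sketch (the `$HOME/.cargo/bin` location is an assumption about where rustup lives on that runner):

```shell
# Sketch: extend PATH for all subsequent steps via the GITHUB_PATH file.
# In a real workflow GitHub Actions sets $GITHUB_PATH automatically; here we
# simulate it with a temp file so the snippet is self-contained.
GITHUB_PATH="${GITHUB_PATH:-$(mktemp)}"

# Assumption: rustup installed its shims under $HOME/.cargo/bin.
echo "$HOME/.cargo/bin" >> "$GITHUB_PATH"

# Directories listed in this file are prepended to PATH for later steps,
# so `rustup show` would resolve without sourcing ~/.profile.
cat "$GITHUB_PATH"
```

In a workflow this would be a one-line early step (`echo "$HOME/.cargo/bin" >> "$GITHUB_PATH"`) instead of repeating `source $HOME/.profile` in each job.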
Here is the latest version of the workflow file I tried:
```yaml
name: Check, Build & Tests

# Controls when the action will run.
on:
  # Triggers the workflow on push or pull request events but only for the main branch
  push:
    branches: [main]
  pull_request:
    branches: [main]
  # Allows you to run this workflow manually from the Actions tab
  workflow_dispatch:

jobs:
  lint:
    runs-on: [self-hosted, madara]
    steps:
      - uses: actions/checkout@v3
      - name: Setup rust toolchain
        run: |
          source $HOME/.profile
          rustup show
      - name: Set-Up
        run: |
          sudo apt-get update
          sudo apt-get install -y clang llvm libudev-dev protobuf-compiler
      - uses: Swatinem/rust-cache@v2
      - name: Format and clippy
        run: |
          cargo fmt --all -- --check
          cargo clippy --all -- -D warnings
          cargo clippy --tests -- -D warnings

  coverage:
    runs-on: [self-hosted, madara]
    steps:
      - uses: actions/checkout@v3
      - name: Setup rust toolchain
        run: |
          source $HOME/.profile
          rustup show
      - name: Set-Up
        run: |
          sudo apt-get update
          sudo apt-get install -y clang llvm libudev-dev protobuf-compiler
      - name: Install cargo-llvm-cov
        uses: taiki-e/install-action@cargo-llvm-cov
      - uses: Swatinem/rust-cache@v2
      - name: Coverage
        run: cargo llvm-cov --codecov --output-path codecov.json
      - name: Upload coverage to codecov.io
        uses: codecov/codecov-action@v3
        with:
          token: ${{ secrets.CODECOV_TOKEN }}
          files: codecov.json
          fail_ci_if_error: true

  integration-tests:
    runs-on: [self-hosted, madara]
    env:
      BINARY_PATH: ../target/debug/madara
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-node@v3
        with:
          node-version: ">=20"
          cache: "npm"
          cache-dependency-path: ./tests/package-lock.json
      - name: Install
        run: |-
          cd tests
          npm install
      - name: Setup rust toolchain
        run: |
          source $HOME/.profile
          rustup show
      - name: Set-Up
        run: |
          sudo apt-get update
          sudo apt-get install -y clang llvm libudev-dev protobuf-compiler
      - uses: Swatinem/rust-cache@v2
      - run: cargo build --workspace
      - name: Run test
        run: |-
          cd tests
          npm run test
```
cc @drspacemn @LucasLvy
Okay, so it looks like that's what happened. I killed it and restarted the service, and our PATH looks good now.
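For reference, a sketch of the restart on the runner box, assuming the runner was installed as a service with the stock `svc.sh` helper (the install directory path is an assumption):

```shell
# Run from the actions-runner install directory (assumed path).
cd ~/actions-runner
sudo ./svc.sh stop     # stop the runner service
sudo ./svc.sh start    # start it again so it picks up the updated environment
sudo ./svc.sh status   # confirm it is active
```

The service captures its environment at start-up, which would explain why a restart was needed for the `PATH` change to take effect.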
One question: do we want to run multiple runners on this machine so we can build tasks in parallel, or are we going to have multiple runner nodes?
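If we go the multiple-runners-per-machine route, here is a sketch of registering a second runner instance on the same box (the repo URL, token, and names are placeholders, not real values):

```shell
# Each runner instance needs its own directory and a unique --name.
mkdir ~/actions-runner-2 && cd ~/actions-runner-2
# ...download and extract the actions-runner tarball here...
./config.sh --url https://github.com/<ORG>/<REPO> --token <REGISTRATION_TOKEN> \
  --name madara-2 --labels madara --unattended
sudo ./svc.sh install   # install this instance as its own service
sudo ./svc.sh start
```

Each registered runner process takes one job at a time, so two instances on one machine would let e.g. the lint and coverage jobs run in parallel, at the cost of sharing CPU and disk.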
A gift 🎁