Closed louis030195 closed 3 weeks ago
Increase the amount of logging and debugging information in your code to accurately locate where the memory issue is. This will help in identifying specific sections that may require optimization.
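A minimal sketch of what that checkpoint-style logging could look like (the helper name and call sites are illustrative, not from the screenpipe codebase; real code would bracket V8/runtime initialization and large allocations):

```rust
use std::time::Instant;

// Hypothetical helper: format a checkpoint line with elapsed milliseconds,
// so an OOM crash can be narrowed down to the span between the last two
// checkpoints that made it to stderr.
fn checkpoint_line(label: &str, elapsed_ms: u128) -> String {
    format!("[{} ms] checkpoint: {}", elapsed_ms, label)
}

fn main() {
    let start = Instant::now();
    eprintln!("{}", checkpoint_line("startup", start.elapsed().as_millis()));

    // stand-in for a suspect section (e.g. runtime init, large buffer allocation)
    let _buf = vec![0u8; 1024 * 1024];

    eprintln!("{}", checkpoint_line("after 1 MiB allocation", start.elapsed().as_millis()));
}
```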
try this:
```yaml
jobs:
  build:
    runs-on: macos-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v2
      - name: Set up Rust
        uses: actions-rs/toolchain@v1
        with:
          toolchain: stable
          override: true
      - name: Build project
        run: cargo build --release
        env:
          RUSTFLAGS: "-C target-cpu=native"
          V8_FLAGS: "--max-old-space-size=4096" # Adjust this value as needed
```
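One caveat worth hedging here: an `env:` entry on the build step only sets variables for that step's process (the compiler), so `V8_FLAGS` likely never reaches the V8 engine embedded in the final binary. An embedded V8 typically only sees flags the program passes explicitly at startup (e.g. a `set_flags_from_string`-style call in rusty_v8/deno_core). A self-contained sketch of the parsing half, with a hypothetical helper name:

```rust
use std::env;

// Hypothetical helper: split a V8_FLAGS-style value into individual flags
// that the program could forward to the embedded V8 at startup.
fn parse_v8_flags(raw: &str) -> Vec<String> {
    raw.split_whitespace().map(|s| s.to_string()).collect()
}

fn main() {
    // Read at *runtime*, not build time, then hand off to the engine.
    let raw = env::var("V8_FLAGS").unwrap_or_default();
    eprintln!("v8 flags to forward: {:?}", parse_v8_flags(&raw));
}
```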
optimisation?
my JS code is this:
```js
console.log("hi")
```
@liashood
same issue with the `V8_FLAGS: "--max-old-space-size=4096"`
what's `RUSTFLAGS: "-C target-cpu=native"`? is it necessary?
@devsnek please take a look
This backtrace doesn't make much sense to me, CodeRange allocation is not part of setting up the stack. Maybe try a debug build to get a better backtrace?
does not crash in debug mode :(
so weird
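Since the crash only reproduces in release builds, a common middle ground (sketched here as a suggestion, not something from the thread) is a release build with debug info enabled: optimizations stay on, so the crash usually still reproduces, but `RUST_BACKTRACE=1` produces symbolized frames.

```toml
# Cargo.toml: keep release optimizations but emit debug info
# so backtraces from the release binary are readable.
[profile.release]
debug = true
```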
/Applications/screenpipe.app/Contents/MacOS/screenpipe -> crash
~/Downloads/screenpipe -> works
Can the `Fatal process out of memory: Failed to reserve virtual memory for CodeRange` error be related to a permission issue with file access? @devsnek
nvm, it's not the path: i ran the CI build, ran it, moved it to different paths, and it works
i sign the CLI using an apple developer license; i doubt this could be the issue but i'm checking
actually i'm confused: the error does not happen on my friend's computer
1. download build pre tauri + apple signing
2. run `~/Downloads/spp6` -> works
3. download build post tauri + apple signing
4. run `/Applications/screenpipe.app/Contents/MacOS/screenpipe` -> crash
@devsnek are you available for a call in the upcoming hours? i'm offering a $300 OSS bounty for this, it's quite urgent
https://cal.com/louis030195/abcd
i bet we can solve it quick over a call
I don't know much about macOS code signing and I don't have a mac, so I can't really advise you further here.
not sure how but i fixed the issue
code https://github.com/mediar-ai/screenpipe/blob/main/screenpipe-core/src/pipes.rs
this is weird
when i build locally
cargo build --release
it works and i don't have this issue, but when i build in github ci (macos runner), the binary crashes with this error when i use my deno features: https://github.com/mediar-ai/screenpipe/blob/main/.github/workflows/release-app.yml https://github.com/mediar-ai/screenpipe/blob/4ea365ad2655343c350e6dd2159d287c820e0699/.github/workflows/release-app.yml#L154
only on macos, works fine on windows
do you know anything i could change?
tried to bump the version without success
claude
here are 10 possible reasons for this issue, focusing on the differences between your local build and the github ci macos runner:
1. memory constraints: the ci runner might have less available memory than your local machine, causing the out of memory error.
2. deno version mismatch: there could be a version incompatibility between deno_core and other dependencies in the ci environment.
3. macos version differences: the macos version on the runner might be different from your local machine, affecting memory management.
4. cargo caching issues: improper caching of dependencies in the ci pipeline could lead to inconsistent builds.
5. compiler optimizations: different optimization levels between local and ci builds might affect memory usage.
6. environment variables: missing or different environment variables in the ci could impact the build process.
7. third-party library conflicts: interactions between deno and other libraries might behave differently in the ci environment.
8. architecture differences: subtle differences in cpu architecture between your local machine and the runner could affect memory allocation.
9. ci-specific limitations: github actions might impose resource limits that aren't present locally.
10. dependency resolution: the ci might resolve dependencies slightly differently, pulling in versions that conflict in unexpected ways.
to troubleshoot, you could try:
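For example, the environment-variable point above can be checked concretely by dumping a few build-relevant variables locally and in CI and diffing the output (the variable list here is illustrative, not exhaustive):

```rust
use std::env;

// Minimal sketch: collect the values of a few build-relevant environment
// variables so local and CI output can be diffed side by side.
fn collect_env(keys: &[&str]) -> Vec<(String, Option<String>)> {
    keys.iter()
        .map(|k| ((*k).to_string(), env::var(*k).ok()))
        .collect()
}

fn main() {
    for (key, value) in collect_env(&["RUSTFLAGS", "CARGO_BUILD_TARGET", "MACOSX_DEPLOYMENT_TARGET"]) {
        println!("{} = {:?}", key, value);
    }
}
```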