o3de / ROSConDemo

A robotic fruit picking demo project for O3DE with ROS 2 Gem

Editor Link step causes memory exhaustion with 16GB #258

Closed adamsj-ros closed 10 months ago

adamsj-ros commented 1 year ago

Using the prescribed build, it appears that the link step for Editor is exhausting my available memory. I noticed that CMAKE_JOBS is set to 8 by default, but at the link step the system uses all 16 cores on my machine. I've found in the past that the easiest way to manage memory exhaustion in a C/C++ build is to limit the number of cores for the various steps. In this case, where would be the best place to inject a limit of, say, 8 cores for the link step?

https://github.com/o3de/ROSConDemo/blame/91971cb3c8d98e157d1f48b4bf42271ee4cce523/README.md#L149

adamsj-ros commented 1 year ago

I also wonder if there is a way to limit what I'm assuming are Gems being added, or whatever the 1405 build steps are. Any reduction would help a build complete.

adamsj-ros commented 1 year ago

Looking at the Dockerfile again, it appears CMAKE_JOBS may be a misnomer: it is actually an option passed directly to Ninja to set the number of concurrent build jobs. From what I can find, the only way to reduce memory allocation is to reduce the -j value (in our case, CMAKE_JOBS). It won't be a hard limit, since memory use varies with the part of the build that is running, but it should help.
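As a rough back-of-the-envelope sketch (the ~2 GiB-per-job budget below is an assumption, not something prescribed by O3DE or this thread), a `-j` value can be derived from the machine's RAM instead of its core count:

```shell
# Sketch (assumption): budget roughly 2 GiB of RAM per concurrent job,
# since heavy C++ link steps can each consume gigabytes of memory.
mem_kb=$(awk '/MemTotal/ {print $2}' /proc/meminfo)   # total RAM in kB (Linux)
jobs=$(( mem_kb / (2 * 1024 * 1024) ))                # ~2 GiB per job
[ "$jobs" -lt 1 ] && jobs=1                           # always allow at least one job
echo "Suggested Ninja job count: $jobs"
```

The result could then be fed to the build, e.g. as the `CMAKE_JOBS` value passed to the Docker build or as `ninja -j "$jobs"`.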

adamdbrw commented 1 year ago

Indeed, especially on lower-end machines it can be useful to reduce the number of concurrent build jobs. Regarding memory use, it is also good to ensure sufficient swap space. Subsequent incremental builds are less demanding.

Note that the demo has been mostly dormant since ROSCon 2022, but we aim to port it to the development branches, restore support, and do some clean-up prior to ROSCon 2023.

spham-amzn commented 1 year ago

There are a few options here:

  1. For O3DE, there are custom settings that can be used to control/lock down Ninja concurrency: LY_PARALLEL_COMPILE_JOBS and LY_PARALLEL_LINK_JOBS.
  2. As you mentioned, -j passed to the build command can also help.
  3. If you don't want to modify the Docker script, you can limit the number of CPUs available to the Docker build, which caps the CPUs used during the container build process.

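For reference, the three options might look like this on the command line. The build directory, image tag, and job counts are placeholders: LY_PARALLEL_COMPILE_JOBS and LY_PARALLEL_LINK_JOBS are O3DE CMake cache variables set at configure time, and `--cpuset-cpus` is one way to cap the CPUs visible to a classic `docker build`.

```shell
# Option 1 (assumed build directory): cap Ninja concurrency via O3DE's
# CMake settings at configure time -- link jobs are the memory-hungry ones.
cmake -B build/linux -G Ninja \
      -DLY_PARALLEL_COMPILE_JOBS=8 \
      -DLY_PARALLEL_LINK_JOBS=2

# Option 2: pass -j straight through to the underlying build tool (Ninja).
cmake --build build/linux -- -j 8

# Option 3 (hypothetical image tag): cap the CPUs available to the
# container build instead of touching the build scripts.
docker build --cpuset-cpus=0-7 -t roscondemo .
```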
michalpelka commented 1 year ago

@adamsj-ros did you manage to resolve the issue with @spham-amzn recommendations?

michalpelka commented 12 months ago

It is possible that resources are being exhausted by the AssetProcessor rather than the compiler. The maximum number of jobs is set here: https://github.com/o3de/o3de/blob/2bdadd250b7341e2e98cc222418fe6d108c1a10a/Registry/AssetProcessorPlatformConfig.setreg#L71-L74 I would recommend overriding this setting in the project:
https://github.com/o3de/ROSConDemo/blob/development/Project/Registry/assetprocessor_settings.setreg

```json
{
    "Amazon": {
        "AssetProcessor": {
            "Settings": {
                "Jobs": {
                    "minJobs": 1,
                    "maxJobs": 4
                },
                "ScanFolder Project/ShaderLib": {
                    "watch": "@PROJECTROOT@/ShaderLib",
                    "recursive": 1,
                    "order": 1
                },
                "ScanFolder Project/Shaders": {
                    "watch": "@PROJECTROOT@/Shaders",
                    "recursive": 1,
                    "order": 2
                },
                "ScanFolder Project/Registry": {
                    "watch": "@PROJECTROOT@/Registry",
                    "recursive": 1,
                    "order": 3
                }
            }
        }
    }
}
```

Please adjust "maxJobs": 4 to a number that does not cause memory exhaustion on your machine.
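A misspelled key in a `.setreg` file can quietly leave the default in place, so it is worth validating the override before launching the Editor. Assuming `.setreg` files are plain JSON (they use JSON syntax), any JSON parser can do this; the sketch below writes a hypothetical minimal override to a temp file and reads back the job cap:

```shell
# Hypothetical minimal override written to a temp file for illustration;
# in practice you would validate Project/Registry/assetprocessor_settings.setreg.
cat > /tmp/assetprocessor_settings.setreg <<'EOF'
{
  "Amazon": {
    "AssetProcessor": {
      "Settings": {
        "Jobs": { "minJobs": 1, "maxJobs": 4 }
      }
    }
  }
}
EOF

# Parse the file as JSON and pull out maxJobs; a typo in the structure
# would surface here as a parse or key error instead of failing silently.
maxjobs=$(python3 -c "import json; s = json.load(open('/tmp/assetprocessor_settings.setreg')); print(s['Amazon']['AssetProcessor']['Settings']['Jobs']['maxJobs'])")
echo "maxJobs = $maxjobs"
```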

michalpelka commented 10 months ago

I am closing the issue. @adamsj-ros, please feel free to re-open if you still have problems with resource exhaustion.