TheOpenSpaceProgram / osp-magnum

A spaceship game
https://www.openspaceprogram.org/
MIT License

Tasks and more II #244

Closed: Capital-Asterisk closed this 1 year ago

Capital-Asterisk commented 1 year ago

This took a while. If I remember correctly, trying to smash together some of the new universe stuff caused the duct tape to fall off of the tagged task system, so a new one was needed.

New task system:

Example:

before:

shapeSpawn.task() = rBuilder.task()
    .assign({tgSceneEvt, tgSpawnReq, tgSpawnEntReq, tgTransformNew, tgHierNew})
    .data(
        "Add hierarchy and transform to spawned shapes",
        TopDataIds_t{           idBasic,             idActiveIds,              idSpawner,             idSpawnerEnts },
        wrap_args([] (ACtxBasic& rBasic, ActiveReg_t& rActiveIds, SpawnerVec_t& rSpawner, EntVector_t& rSpawnerEnts) noexcept
{
    // ...
}));

Translation: the task is triggered when the scene event tgSceneEvt is called. The tgSpawnReq tag means there are other tasks with tags like tgSpawnMod that must finish before this task runs. The same goes for the other tags.

The wrap_args function adapts the lambda's function signature to accept an array of 'any' containers as arguments, since all application data (named TopData) is stored as one big vector of 'any's. idBasic, idActiveIds, ... are indices into that vector.
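For a sense of how such an adapter can work, here's a minimal C++20 sketch, assuming std::any for the containers. This is not the real wrap_args; WrapTraits, wrap_args_sketch, and the stand-in ACtxBasic body are invented for illustration:

#include <any>
#include <cstddef>
#include <utility>
#include <vector>

using TopData_t    = std::vector<std::any>;   // one big vector of 'any's
using TopDataIds_t = std::vector<std::size_t>;

// Deduce argument types from the lambda's operator(), then build a
// callable that pulls each argument out of the 'any' vector by index.
// (Handles const noexcept lambdas, matching the examples above.)
template <typename Func>
struct WrapTraits : WrapTraits<decltype(&Func::operator())> { };

template <typename Class, typename Ret, typename ... Args>
struct WrapTraits<Ret (Class::*)(Args...) const noexcept>
{
    template <typename Func>
    static auto wrap(Func func)
    {
        return [func] (TopData_t& topData, TopDataIds_t const& ids)
        {
            // Expands to: func(any_cast<Arg0>(topData[ids[0]]), ...)
            [&] <std::size_t ... I> (std::index_sequence<I...>)
            {
                func(std::any_cast<Args>(topData[ids[I]]) ...);
            } (std::index_sequence_for<Args...>{});
        };
    }
};

template <typename Func>
auto wrap_args_sketch(Func func)
{
    return WrapTraits<Func>::wrap(func);
}

// hypothetical usage, mirroring the task above
struct ACtxBasic { int value{}; };    // stand-in for the real type

int main()
{
    TopData_t topData(4);
    std::size_t const idBasic = 2;
    topData[idBasic] = ACtxBasic{};

    auto task = wrap_args_sketch([] (ACtxBasic& rBasic) noexcept
    {
        rBasic.value = 42;
    });
    task(topData, {idBasic});
}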

Tags have dependencies on each other; assigning these can get confusing, especially when loops are involved. Dependencies across loops simply don't work. There's also the problem of 'optional branches', or conditions for when tasks are able to run, which makes things more confusing. On top of that, there's quite a bit of ugly linear-search code in there that isn't particularly fast.

after:

rBuilder.task()
    .name       ("Add hierarchy and transform to spawned shapes")
    .run_on     ({tgShSp.spawnRequest(UseOrRun)})
    .sync_with  ({tgShSp.spawnedEnts(UseOrRun), tgCS.hierarchy(New), tgCS.transform(New)})
    .push_to    (out.m_tasks)
    .args       ({      idBasic,                  idSpawner })
    .func([] (ACtxBasic& rBasic, ACtxShapeSpawner& rSpawner) noexcept
{
    // ...
});

Translation: this task runs when the spawnRequest pipeline is on its UseOrRun stage. The task must only run once spawnedEnts is on its UseOrRun stage, hierarchy is on its New stage, and transform is on its New stage.

Each 'Pipeline' is a sequence of 'Stages' that run one by one. Tasks are set to run when a pipeline reaches a certain stage, and can be 'synchronized' to block and only run when other pipelines reach certain stages. A pipeline can only advance to its next stage once every task running on it is done, and every task synchronized with it is done too.
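To make that advance rule concrete, here's a minimal sketch. StageRef, Task, and pipeline_can_advance are hypothetical names for illustration, not the real osp-magnum types:

#include <cstddef>
#include <vector>

// A reference to one stage of one pipeline
struct StageRef
{
    std::size_t pipeline;
    std::size_t stage;
};

struct Task
{
    StageRef              runOn;    // stage this task runs on
    std::vector<StageRef> syncWith; // stages this task synchronizes with
    bool                  done{false};
};

// A pipeline may advance past its current stage only once every task
// that runs on that stage, or is synchronized with it, has finished
bool pipeline_can_advance(std::size_t pl, std::size_t stage,
                          std::vector<Task> const& tasks)
{
    for (Task const& task : tasks)
    {
        bool blocks = (task.runOn.pipeline == pl) && (task.runOn.stage == stage);
        for (StageRef const& sync : task.syncWith)
        {
            blocks = blocks || ((sync.pipeline == pl) && (sync.stage == stage));
        }
        if (blocks && ! task.done)
        {
            return false;
        }
    }
    return true;
}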

Pipelines can be parented to each other, forming a tree of pipelines. This makes it possible to support things like loops, where pipelines within the loop can sync with each other, while also properly handling cases where those pipelines depend on things outside the 'loop scope' and vice versa. This gets confusing, but the new system is capable of handling it.

Pipeline stages are defined as enums:

/**
 * @brief Continuous Containers, data that persists and is modified over time
 */
enum class EStgCont : uint8_t
{
    Prev,   ///< Previous state of container

    Delete, ///< Remove elements from a container or mark them for deletion. This often involves
            ///< reading a set of elements to delete. This is run first since it leaves empty
            ///< spaces for new elements to fill directly after

    New,    ///< Add new elements. Potentially resize the container to fit more elements

    Modify, ///< Modify existing elements

    Ready   ///< Container is ready to use
};
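To illustrate how the tgCS.transform(New) syntax from the task above can plug into these enums, here's a hedged sketch; PipelineId, TplPipelineStage, PipelineDef, and CommonScene are assumed names for illustration, not necessarily the real types:

#include <cstdint>

// (EStgCont is the enum defined just above)
using PipelineId = std::uint32_t;

// A concrete (pipeline, stage) pair, the kind of value that run_on
// and sync_with can consume
struct TplPipelineStage
{
    PipelineId   pipeline;
    std::uint8_t stage;
};

// Calling a pipeline definition with a stage enum pairs the two
struct PipelineDef
{
    PipelineId id;

    TplPipelineStage operator()(EStgCont stage) const noexcept
    {
        return { id, static_cast<std::uint8_t>(stage) };
    }
};

// Grouping pipeline definitions in a struct gives the tgCS.transform
// style of access; with 'using enum EStgCont;' in scope, bare stage
// names like New can be passed, as in the example task
struct CommonScene
{
    PipelineDef hierarchy;
    PipelineDef transform;
};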

The pipeline stuff is similar to having mutexes and threads. One problem with mutexes is that there's no way to know whether a function will be blocked by a locked mutex without actually running it, which can leave a large number of threads blocked. Storing the 'blocking information' externally as pipelines/stages/tasks means that executor code can very effectively coordinate which tasks run in parallel using a fixed number of threads. It also becomes possible to analyze critical paths within the graph of pipelines/stages/tasks and assign priorities to tasks.
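Building on the StageRef/Task sketch from earlier, here's a hypothetical single-threaded executor loop that makes this external 'blocking information' explicit; a real executor would hand ready tasks to a fixed pool of worker threads instead of running them inline:

// stageCount: stages per pipeline (assumed uniform for this sketch)
void run_to_completion(std::vector<std::size_t>& currentStage,
                       std::size_t stageCount,
                       std::vector<Task>& tasks)
{
    bool progressed = true;
    while (progressed)
    {
        progressed = false;

        // Run every task whose run-on pipeline has reached its stage
        // and whose sync-with pipelines have reached theirs
        for (Task& task : tasks)
        {
            if (task.done) { continue; }

            bool ready = (currentStage[task.runOn.pipeline] == task.runOn.stage);
            for (StageRef const& sync : task.syncWith)
            {
                ready = ready && (currentStage[sync.pipeline] == sync.stage);
            }
            if (ready)
            {
                // call the task's function here; a parallel executor
                // would queue it to a worker thread instead
                task.done  = true;
                progressed = true;
            }
        }

        // Advance pipelines whose current stage is no longer blocked
        for (std::size_t pl = 0; pl < currentStage.size(); ++pl)
        {
            if ((currentStage[pl] + 1 < stageCount)
                && pipeline_can_advance(pl, currentStage[pl], tasks))
            {
                ++currentStage[pl];
                progressed = true;
            }
        }
    }
}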

Other stuff

Again, it's probably not worth trying to review the whole thing (or any of it at all), since...