roadrunner-server / roadrunner

🤯 High-performance PHP application server, process manager written in Go and powered with plugins
https://docs.roadrunner.dev
MIT License

[💡FEATURE REQUEST]: Per-plugin server (command) #916

Closed. rustatian closed this issue 2 years ago.

rustatian commented 3 years ago

At the moment, with RR2 it is impossible to run PHP workers per plugin with their own commands; all plugins use the same server.command to start a PHP process. Users are forced to run multiple RR instances if they need to separate different types of workers, for example HTTP workers from Jobs workers. To handle both types of messages in one worker, the user has to write a worker.php with a bunch of conditionals like if http ... else if jobs ... else if temporal. Such worker code is harder to maintain and read: one big worker.php becomes responsible for many types of responses, instead of smaller workers with their own domain responsibilities. It is also impossible to use different users/groups, or even different relays, for different plugins.

My proposal is to include an optional server section in every plugin that uses workers, to give precise control over the plugin's worker allocation, and to eliminate the supervisor section by merging its options into the pool options.

http:
    address: 127.0.0.1:8080
    max_request_size: 256
    middleware: ["headers", "gzip"]
    trusted_subnets: []

    server:
        command: "php psr-worker.php"
        user: "www"
        group: ""
        env:
            - SOME_KEY: "SOME_VALUE"
            - SOME_KEY2: "SOME_VALUE2"
        relay: pipes
        relay_timeout: 60s

    # Workers pool settings.     
    pool:           
        debug: false
        num_workers: 0
        max_jobs: 64
        allocate_timeout: 60s
        destroy_timeout: 60s
        watch_tick: 1s
        ttl: 0s
        idle_ttl: 10s
        max_worker_memory: 128
        exec_ttl: 60s     
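Under the same proposal, another plugin could declare its own server section with a different command, user, and pool size. A hypothetical sketch (the jobs-worker.php name and values are illustrative, the keys mirror the http example above):

```yaml
jobs:
    # Hypothetical per-plugin server: Jobs workers run a different
    # command, under a different user, than the http workers above.
    server:
        command: "php jobs-worker.php"
        user: "jobs"
        relay: pipes
        relay_timeout: 60s

    pool:
        num_workers: 100        # more workers for the heavier jobs load
        allocate_timeout: 60s
        destroy_timeout: 60s
        max_worker_memory: 128
```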
OO00O0O commented 3 years ago

Maybe I'm wrong, but doesn't this force you to have multiple "kernels" rather than one? I had doubts about this functionality in general, but given that you have only one entry point, it makes for a very compact kernel.

Actually, I would love to get Task and Request over the same loop and use the same workers for all jobs and requests. It really makes sense in Symfony, where you have the same DI container everywhere. The only problem I see is strict resource control.

The WS plugin already sends an authorization request to the HTTP worker via attributes; a Job could work the same way.

rustatian commented 3 years ago

Hey @OO00O0O, thanks for your feedback. Actually, these are not kernels; RR has an abstraction called server (maybe not a good name, since I've heard PHP devs call this part the kernel). A per-plugin server would only configure the server for that plugin's needs, without redeclaring or copying the server plugin. Any plugin can ask the server for a worker or a workers pool by providing only a configuration. At the moment, that configuration lives inside the server plugin; this RFC is about extending the other plugins with their own pool configuration. So, for example, the http plugin might allocate 20 workers and jobs 100, because http has only 100 req/s of load while jobs has 10k req/s (just for the sake of example). At the moment this is impossible, because the server plugin can only be configured via its own configuration section. And this is not a BC break: for users who don't use this feature, everything stays as it was.

But, more importantly, I definitely don't want to merge this feature into RR without discussing it with the community, because I'd rather improve RR performance than implement a useless feature 😃

OO00O0O commented 3 years ago

A lot of devs call it a kernel, because most devs have a single entry point, handle(Request $request): Response (aka the Symfony Kernel). And Symfony uses the same kernel for console applications.

RR could append the mode to the packet rather than to the worker env (RR_MODE), and therefore reuse the same kernel resources: handle(Request|Task $object): null|Response. In Symfony, only different mappings would be used for each object; in the end the DI engine makes the final call.

rustatian commented 3 years ago

> RR could append the mode to the packet rather than to the worker env (RR_MODE), and therefore reuse the same kernel resources: handle(Request|Task $object): null|Response. In Symfony, only different mappings would be used for each object; in the end the DI engine makes the final call.

I need to ask our PHP team about appending the mode to our protocol. From the protocol's POV this is possible; I designed it keeping in mind that we might use the Options, so let's see.

OO00O0O commented 3 years ago

I always thought that RR should handle all the magic around generic protocols and processes, give the PHP dev only the end abstraction, and have a single worker pool:

Temporal ------------\
HTTPRequest ----------\
WebSocketMessage ------> RR (parse, pack) -> Single big worker pool -( if needs response )- > RR 
AuthRequest ----------/
RPCInterface --------/

A PHP worker would then just do: $packet = $pool->waitPacket().

Anyway, I have some time now, and I will try this in Rust.

rustatian commented 3 years ago

What exactly do you want to try in Rust?

wolfy-j commented 3 years ago

Please note that single pools are prone to oversaturation by one type of workload, so we will most likely end up with some capacity segregation anyway.

wolfy-j commented 3 years ago

I'm open to discussing it in our discord in voice. :)

OO00O0O commented 3 years ago

I will try to implement (or find a good cargo crate for) a WebSocket and HTTP server like RR does, but with a single PHP worker pool, so that I could handle WebSocket messages too. It would allow a master/slave configuration, where the master server accepts connections from slave servers and this way extends the worker pool with remote workers.

Of course, I see this in a Symfony context, where the DI container is required at all times.

rustatian commented 3 years ago

> I will try to implement (or find a good cargo crate for) a WebSocket and HTTP server like RR does, but with a single PHP worker pool, so that I could handle WebSocket messages too. It would allow a master/slave configuration, where the master server accepts connections from slave servers and this way extends the worker pool with remote workers.
>
> Of course, I see this in a Symfony context, where the DI container is required at all times.

You'd have to implement the goridge protocol first (if you want to communicate with RR), or your own protocol. Btw, I've already implemented this in Rust (private repo at the moment). For HTTP you may use tokio; for WS you can also use tokio (or mio, or the stdlib). But there is no performance benefit in implementing this in Rust, because you are still working with PHP workers :)

OO00O0O commented 3 years ago

@wolfy-j I could not find your Discord, mate. Is it not public? There's nothing under the "roadrunner" or "spiral" keywords.

@rustatian I just want to better understand interprocess communication and POSIX, hence Go, Rust, or C. At the end of the day, all the protocol heavy lifting is done by protobuf. PHP workers are already very fast with RR, because all the hard work is done before the request. If we consider PHP as a DSL (see JetBrains MPS) and move all unnecessary work to the server, it could get even faster. Often the worker wouldn't need to be called at all (e.g. getting a cached value from KV and sending it in the response rather than calling the worker).

rustatian commented 3 years ago

https://discord.gg/TFeEmCs

rustatian commented 3 years ago

> I just want to better understand interprocess communication and POSIX

You may have a look at the goridge shared-memory System V implementation (with a semaphore) -> https://github.com/spiral/goridge/blob/master/pkg/shared_memory/posix/posix_shm.go

The other implementations are very simple, since pipes just write to the process fd, like /dev/stderr.