Closed klausmyrseth closed 4 years ago
I might have found the culprit here: I'm running Sanic with two processes, and now I have two entries randomly shifting when I list the context.
I thought the context was shared between the processes?
Hi @klausmyrseth You're absolutely right, this is caused by running Sanic with two process workers.
Currently the SanicContext objects are not shared between processes. I'll put a note in the README to make that clear.
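To illustrate why the two workers show diverging entries, here is a minimal stand-alone sketch (plain `multiprocessing`, not the Sanic-Plugin-Toolkit API): each worker process gets its own copy of an ordinary dict, so a write in one worker never reaches the other, or the parent.

```python
import multiprocessing as mp

def set_entry(ctx, name):
    # Mutates this child's *copy* of ctx, not the parent's dict.
    ctx[name] = "set-by-" + name

def demo_isolation():
    ctx = {}
    procs = [mp.Process(target=set_entry, args=(ctx, n)) for n in ("w1", "w2")]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    return ctx  # still empty: the child writes were per-process

if __name__ == "__main__":
    print(demo_isolation())  # {}
```

This is exactly what happens to a per-process SanicContext: each worker initialises and mutates its own copy, so which entry you see depends on which worker served the request.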
This is something I'd like to fix in an upcoming release, but unfortunately sharing context objects between processes would add a large latency overhead to every shared-context access, which would ruin some of our performance.
For now the best thing you can do is add a middleware near the start of your request pipeline that checks whether your entry is present in the shared context and, if not, re-initialises it on that process.
Hello @ashleysommer, can SanicContext objects still not be shared between processes?
Hi @jonra1993 No, sorry Sanic-Plugin-Context cannot share objects between processes yet.
However, this month I released Sanic-Synchro-CTX, a new Sanic plugin that allows you to share context objects between worker processes (using either Redis as a backend or native Python sync objects).
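As a rough illustration of the "native Python sync objects" approach (not the Sanic-Synchro-CTX API itself): a `multiprocessing.Manager` dict is backed by a server process, so writes from any worker are visible to all of them, at the cost of IPC latency on every access, which is the overhead mentioned above.

```python
import multiprocessing as mp

def set_shared(shared, name):
    # A proxied write: forwarded to the manager process, visible everywhere.
    shared[name] = "set-by-" + name

def demo_shared():
    with mp.Manager() as mgr:
        shared = mgr.dict()
        procs = [mp.Process(target=set_shared, args=(shared, n))
                 for n in ("w1", "w2")]
        for p in procs:
            p.start()
        for p in procs:
            p.join()
        return dict(shared)  # both workers' writes are present

if __name__ == "__main__":
    print(demo_shared())
```

The same trade-off applies to a Redis backend: every read and write becomes a round-trip to a separate process or server instead of a local attribute lookup.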
That plugin is not a Sanic-Plugin-Toolkit plugin and might not play well with Sanic-Plugin-Toolkit. However, it does mean that I can probably merge that functionality into Sanic-Plugin-Toolkit in the next version.
Thanks @ashleysommer, I am going to check out the library you suggested.
Env: Python 3.7.5, Sanic latest stable.
I am trying to use contextualize with context.shared to track per-client values that should not be reinitialised per request. This gets my average request time down from 60000 ms to 15 ms, awesome feature :D
But I noticed entries dropping from the context now and again. Is there a TTL on the dictionary keys and, if so, how do I override it? I need to do my own cleanup to keep the data sane.
I've been trying to reliably reproduce it, and currently I cannot put my finger on what makes the values drop. I have a scenario with a Server-Sent Events (SSE) stream sending out data posted to other endpoints through context.shared. It works like a charm up to a point, but when the client reconnects to the SSE stream or drops the connection, the shared context entries tend to disappear after a short while.
At first I thought it was because I used context.get('shared', None), but after porting to context.shared I still get the same problem. I am trying to use the global shared context for contextualize.
It would be great to get some feedback on this issue, as I would love to get it up and running properly.