mbr / flask-kvsession

A drop-in replacement for Flask's session handling using server-side sessions.
http://pythonhosted.org/Flask-KVSession/
MIT License
168 stars, 53 forks

Bypass has_expired check when session store supports TimeToLive #37

Closed whacked closed 9 years ago

whacked commented 9 years ago

(1 line change)

motivation

(was hoping this would help visitors searching for this issue)

I'm using RedisStore and am having trouble extending the session lifetime on each visit.

Once PERMANENT_SESSION_LIFETIME is set for RedisStore, the data is saved with SETEX, so expiration happens automatically. As I understand it, by design, if the store supports TTL, housekeeping functions like cleanup_sessions and even SessionID.has_expired can, for all practical purposes, be ignored by the user.
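To make the SETEX-style expiry concrete, here is a toy in-memory stand-in (my own sketch, not simplekv's actual RedisStore) where put takes a TTL and the key simply vanishes once it elapses:

```python
import time

class TTLStore:
    """Toy stand-in for a TTL-capable store; not the real RedisStore."""
    def __init__(self, clock=time.monotonic):
        self._clock = clock      # injectable clock, handy for testing
        self._data = {}          # key -> (value, expires_at)

    def put(self, key, value, ttl):
        # Like Redis SETEX: store the value and schedule automatic expiry.
        self._data[key] = (value, self._clock() + ttl)

    def get(self, key):
        value, expires_at = self._data.get(key, (None, None))
        if expires_at is None or self._clock() >= expires_at:
            self._data.pop(key, None)  # the key "vanishes", as in Redis
            return None
        return value
```

With a store like this, no explicit cleanup pass is ever needed; the expiry is enforced by the store itself.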

issue

The code in https://github.com/mbr/flask-kvsession/blob/master/flask_kvsession/__init__.py#L148-L152 performs the has_expired check by deserializing KEY_CREATED, so the TTL set in save_session at https://github.com/mbr/flask-kvsession/blob/master/flask_kvsession/__init__.py#L183-L186 ends up being ignored.

proposed change

In open_session, check whether the store supports TTL before the has_expired check; if it does, bypass the has_expired check that relies on KEY_CREATED.
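The proposed gate could look roughly like this (a hypothetical sketch; the helper name is mine, and the real open_session code differs in detail):

```python
def should_check_expiry(store):
    """Decide whether open_session needs the KEY_CREATED-based check.

    If the backend expires keys itself (e.g. Redis via SETEX), trust the
    store's TTL and skip the has_expired check; otherwise keep it.
    """
    return not getattr(store, 'ttl_support', False)
```

This mirrors the existing ttl_support probe that save_session already performs.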

impact

No regressions, and repeated visits extend the session lifetime according to PERMANENT_SESSION_LIFETIME. The change feels a bit hacky, though. Is it sensible?

Thanks!

whacked commented 9 years ago

Yikes. The request context is broken. BRB

EDIT: I don't see why it's failing atm. py.test -rs succeeds while tox fails, both on this branch and master.

mbr commented 9 years ago

Current implementation issues aside, this is a hard thing to check for me right now (I'd want at least an hour or two to think through all the scenarios and I'm a little short on time recently).

The issue I can think of is that time-to-live support in the session backend is "write-only", for lack of a better word - it can influence the way sessions are cleaned up and can only cause sessions to live shorter than usual, but it should not have any impact on how verification is done or how sessions are restored. Put simply, ttl_support is only mentioned in save_session(), not in open_session().

For better understanding, could you give an example scenario where the new version performs better than the current one?

whacked commented 9 years ago

Ah, I can see it being a problem if TTL vs non-TTL behavior differs here, though currently this behavior doesn't seem to be explicitly documented in either case. I admit I did not consider any other case using any other backend.

Here's an example where the change makes intuitive sense.

Set PERMANENT_SESSION_LIFETIME to 1 minute. The effect is that if you visit an endpoint at time t_0, starting a session cookie, and then don't visit again for 1 minute, the server-side data is destroyed. With RedisStore this happens automatically thanks to SETEX: the server-side key simply vanishes, and a new cookie is issued on the next visit (after >1 minute).

If you revisit after e.g. 30 seconds, the ttl block in save_session() makes RedisStore run a SETEX on the session data with now() + app.permanent_session_lifetime, i.e. t_now + 60 = t_0 + 90 seconds. So if the user revisits at t_0 + 75 seconds, RedisStore thinks the data still has 15 seconds of life in it, but the has_expired check in open_session is unaware of the new expiration set via TTL and expires the session anyway, before ever consulting the store, no matter what save_session did. In the worst case, the session key resides server-side for 2x as long as the cookie is valid client-side (if you revisit immediately before it expires).
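The timeline above can be checked with a few lines of arithmetic (variable names are mine, not from flask-kvsession; all values in seconds):

```python
LIFETIME = 60                       # PERMANENT_SESSION_LIFETIME
t0 = 0                              # first visit: cookie issued, SETEX 60
revisit = t0 + 30                   # save_session runs SETEX again
redis_expiry = revisit + LIFETIME   # t0 + 90: server-side key lives on
check_cutoff = t0 + LIFETIME        # t0 + 60: has_expired cutoff (KEY_CREATED)
late_visit = t0 + 75

# The late visit falls in the gap: still live in Redis, yet "expired"
# according to the KEY_CREATED-based check.
assert check_cutoff < late_visit < redis_expiry
```

The 15-second window between t_0 + 60 and t_0 + 75 is exactly the discrepancy described.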

To keep open_session as-is, but still make the server-side key lifetime match the client-side cookie lifetime, you could e.g. change https://github.com/mbr/flask-kvsession/blob/9309f119bd7b52af97e0af8cdd30226e2fa39b33/flask_kvsession/__init__.py#L184-L187

            if getattr(store, 'ttl_support', False):
                # TTL is supported
                # OLD: reset the key's TTL to the full lifetime on every save
                # ttl = current_app.permanent_session_lifetime.total_seconds()
                # NEW: preserve the key's remaining TTL instead
                ttl_new = store.ttl(session.sid_s)
                store.put(session.sid_s, data, ttl_new)

Here, ttl_new never exceeds the lifetime already on the key, so the server-side key expires no later than the has_expired check encoded in sid_s.

Put another way, right now there is no way to extend the session lifespan server-side (at least with the in-memory stores). The closest operation is session.regenerate, but that reissues the entire cookie. With this change we use the server-side store as the point of truth: if the key vanished, the session expired; otherwise it is valid. The 1-line check only ensures that this behavior is preserved for non-TTL backends.

whacked commented 9 years ago

I have pushed a commit to my fork with expiration tests: https://github.com/whacked/flask-kvsession/commit/c2d972ec464ac36045750ef7e321e63ceee1f129

It's a single new test with a series of assertions on the session/cookie lifecycle. Some may be superfluous, but the final assertion is the one that fails on master and passes on my skip-has_expire-if-TTL-support branch. So it comes down to what behavior you expect for the client-side cookie vs. the server-side session key.

The subscript notation is a bit ugly. We basically make a request, store the cookie, wait until close to the TTL, make a new request, and ensure the cookie is the same (master and my fork behave the same up to here); then wait some more so we are definitely beyond the TTL, make a new request, and ensure the cookie is still the same. master issues a new one; my fork does not.
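The lifecycle that test exercises can be modeled in a few lines (a toy simulation of my own, not the fork's actual test): the store refreshes the key's TTL on every save, as SETEX does in save_session(), and the two policies differ only in whether the KEY_CREATED-based freshness check is also applied.

```python
def simulate(visit_times, lifetime, trust_store_ttl):
    """Count distinct session ids issued over a series of visits."""
    sid = 0
    created = expires = None
    issued = set()
    for t in visit_times:
        alive = expires is not None and t < expires            # key in store
        fresh = created is not None and t - created < lifetime  # has_expired
        valid = alive if trust_store_ttl else (alive and fresh)
        if not valid:
            sid += 1             # a new cookie is issued
            created = t
        expires = t + lifetime   # save_session refreshes the server-side TTL
        issued.add(sid)
    return len(issued)

# Visits at t=0, t=50 (close to the 60s TTL), t=100 (beyond the original TTL
# but inside the refreshed one):
assert simulate([0, 50, 100], 60, trust_store_ttl=False) == 2  # master
assert simulate([0, 50, 100], 60, trust_store_ttl=True) == 1   # the fork
```

The final visit is exactly the diverging assertion: master reissues the cookie, the fork keeps it.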

If you reject this PR hopefully you still find the test useful :smiley_cat:

mbr commented 9 years ago

Ah, now I see where you're coming from. That exact use-case is something I had in mind, but I am not yet comfortable implementing it the way you have. I am not convinced I want to rely on Redis to keep session lifetimes, since the lifetime is already baked into the session id - I'd like a single source of truth. Furthermore, this makes time-to-live support change the behavior from a security point of view, which is not what I want (sessions should be "portable" across backends).

My own, less elegant but easier-to-understand idea for solving this was to call regenerate() if the remaining session lifetime is < 0.5 of the total lifetime, or something similar. Basically, err on the side of "losing" a session instead of making the security story more complicated. This also has the advantage of working across backends, in case someone really is using filesystem-based sessions.
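That heuristic is small enough to sketch directly (function and parameter names are mine, purely illustrative):

```python
def should_regenerate(created_at, now, lifetime):
    """Regenerate once less than half the session lifetime remains.

    Erring toward "losing" a session rather than extending one that
    should already have died; works with any backend.
    """
    remaining = lifetime - (now - created_at)
    return remaining < 0.5 * lifetime
```

A session visited often enough thus keeps getting reissued, while an idle one expires normally.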

Maybe yours is the way to go, however this is by no means an auto-include, sorry about that.

whacked commented 9 years ago

I understand your point about portability, and I don't have a clearly better method. I tried regenerate() to extend sessions at one point, but extending based on current lifetime still leads to surprising behavior: for example, if you extend when the remaining lifetime is < 0.5 of the total, and tell users "you will be logged out automatically if you don't visit within 30 days", then a user who visits on day 15 - at exactly 0.5 of the total lifetime - gets no regeneration. They may then wrongly believe they have 30 more days to go.

I am guessing the security part is because you want to keep the lifetime logic invariant? Or is there some other reason? One problem is that once you want to reliably extend the session lifetime, for example by frequently calling regenerate, you make more session writes and cookie exchanges than with a bypass. (Not that this PR fixes the issue either, since it's Redis-specific.)

I will do some more research and revisit, thanks.

mbr commented 9 years ago

> I am guessing the security part is because you want to keep the lifetime logic invariant? Or is there some other reason? One problem is, once you want to reliably extend the session lifetime, for example by frequently issuing regenerate, you will be making more session writes and cookie exchanges than if you used a bypass. (not saying this PR fixes the issue either, since it's redis-specific)

It's not set in stone, but I haven't worked with that section of the code in quite some time. Any significant changes should be made carefully =).

I do remember that the idea that writing may be expensive was a factor, though, especially if caching is involved - although I have no numbers to back that up.