thewinterwind closed this issue 7 years ago
Is it possible that insert queries are not written immediately but with some latency?
Yes, but not much. I am not caching config.
You probably need to cache config. That fixes most issues I see like this.
I'm using config:cache but I still see this issue every day or so.
We still rely on dotenv to populate server variables, so this might be the issue.
Why do you think it is a config cache issue? I guess there may be some latency between sending the response (with the session ID) and app termination (when the session is being saved to the database).
For example:
Otherwise it may be database INSERT query latency so the second request executes SELECT before insert finishes.
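That suspected ordering can be sketched as a timeline (this is an assumption about what happens, not confirmed framework behavior):

```php
// Hypothetical timeline of the suspected race (pseudo-steps, not actual framework code):
//
// Request A: session row does not exist yet -> issues an INSERT into `sessions`
// Request B: arrives with the same session cookie before A's INSERT commits
// Request B: SELECT * FROM sessions WHERE id = ? -> no row found
// Request B: therefore treats the session as "new" and attempts its own INSERT
// Request A: INSERT commits
// Request B: INSERT fails with "Duplicate entry ... for key 'sessions_id_unique'"
```

If this is what happens, any write latency (slow commits, a loaded database, replication lag) widens the window between B's SELECT and A's INSERT.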
Happened to me just now after switching to the database driver for sessions... any idea?
Thanks
ping @taylorotwell
I am experiencing the issue as well, using the default session setup with the DB driver.
:+1:
@GrahamCampbell why close this ticket? It seems like lots of people are having this problem and continue to have it. I think the issue lies in the database driver's save method.
This needs reopening.
Does anyone have a solution?
Deployed this fix to my environment today. I will let you know if that resolves the issue.
@taylorotwell @GrahamCampbell
Laravel version 5.2.37
I've seen a number of these session integrity issues show up on Laracasts, Stack Overflow, etc. I ran into the same issue today and was able to replicate it to some degree. It seems to be a session DB issue rather than a session file issue. To be honest, it is not really a bug but a dev issue that can be solved with middleware. That said, given the norm of multi-device usage, it might be something you want to look into, though I'm unsure if even that applies.
REPLICATE: Let's say you have a home.blade.php and a second page called register.blade.php, with a route '/register' which yields content from home. If you load /register from, say, a MAMP stack, everything works fine as expected. Now, while you still have that session, open a new browser tab. Make a copy of home.blade.php, say debug.blade.php, which again yields to the same register.blade.php on a route '/register'; if you now load /register it will load register and create a duplicate session ID. If you then go back to the original tab and attempt to load register, you will get a DB session integrity duplicate issue. This is why I suspect a number of the bug complaints are login/logout related: they are directing to a different (duplicate) home page while still using the same session. Unsure what is causing the duplicate, i.e. whether it is a csrf token / session / DB session / home page issue.
Hope this helps.
Any solution for this Bug?
:+1:
Any solution for this Bug?
Not in 3 hours.
FWIW, I noticed I can also replicate it by breaking a blade template load during AJAX requests. For example, if I force a blade template to fail (e.g. wrong name) while I'm doing an AJAX init() data load on a vanilla route:
Route::get('/register', [
'as' => 'getRegister',
'uses' => 'ApiController@getRegister',
]);
I've been trying to debug it further but none of my Angular / testing tools are giving me any additional insight. I do believe, per @rkgrep's summary above, that there is a session / queue conflict that might be AJAX related. In my case I have a number of Angular services that init AJAX HTTP requests, and if the template fails (i.e. an exception) and I then attempt to reload, it returns the unencoded JSON with a 200 status code.
javascript?v=1453465343:2050 Uncaught SyntaxError: Unexpected token <
followed by a
angular.js:13642 SyntaxError: Unexpected token < in JSON at position 11625.
the position 11625 is where the html document gets appended to the JSON data.
I think it is worth noting that I get the following exception if it detects an existing session record, but I get the above silent fail if there are 2 existing session records, i.e. somehow the DB has written 2 identical session records, and if it attempts to write a 3rd it fails silently and sends back the JSON data unencoded.
Next exception 'Illuminate\Database\QueryException' with message 'SQLSTATE[23000]: Integrity constraint violation: 1062 Duplicate entry 'a26e09eaf0a210a34e58ebab42acd9435681e0a5' for key 'sessions_id_unique' (SQL: insert into `sessions` (`payload`, `last_activity`, `user_id`, `ip_address`, `user_agent`, `id`) values (YTo0OntzOjY6Il90b2tlbiI7czo0MDoiOVE3M3ZLam12YlpFVEhLNDBSOVAwMDdIZWpKeGY0YUFNQVZDRTFJUCI7czo5OiJfcHJldmlvdXMiO2E6MTp7czozOiJ1cmwiO3M6MzI6Imh0dHA6Ly93d3cuY21vOjg4ODgvYXBpL3BlcnNvbmFzIjt9czo5OiJfc2YyX21ldGEiO2E6Mzp7czoxOiJ1IjtpOjE0NjU5MjE0ODU7czoxOiJjIjtpOjE0NjU5MjE0ODU7czoxOiJsIjtzOjE6IjAiO31zOjU6ImZsYXNoIjthOjI6e3M6Mzoib2xkIjthOjA6e31zOjM6Im5ldyI7YTowOnt9fX0=, 1465921486, , ::1, Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/51.0.2704.84 Safari/537.36, a26e09eaf0a210a34e58ebab42acd9435681e0a5))' in /Users/nolros/Documents/cmo/vendor/laravel/framework/src/Illuminate/Database/Connection.php:713
Stack trace:
Here is an example of the AJAX request:
Request URL:http://www.cmo:8888/api/personas
Request Method:GET
Status Code:200 OK
Remote Address:[::1]:8888
Response Headers
Cache-Control:no-cache
Connection:Keep-Alive
Content-Type:application/json
Date:Tue, 14 Jun 2016 16:24:45 GMT
Keep-Alive:timeout=5, max=96
phpdebugbar-id:b73cf0e58e0f47fd49857463540cbaf3
Server:Apache
Set-Cookie:laravelsession=a26e09eaf0a210a34e58ebab42acd9435681e0a5; path=/; httponly
Set-Cookie:XSRF-TOKEN=eyJpdiI6Ijd2Y0pmN3JCSWRTNGE5dzFnT1FSSFE9PSIsInZhbHVlIjoiNWFubHNNYTVLeGRRc1B4RW9zMGNBQ3NVTFpDeTUrVkdzZWpWWjBRXC9scDlRejQ0TnpwNHQzV3BrUkJlamNUSm5jZjFudmFCM1VLb0dlUEhERHVPN0xRPT0iLCJtYWMiOiJjNTYxZDc3NmM0ZTI5M2MzNGI2ZmM3MDIxZGFjMDZiMzc1MGI1OTM4YWExOGI5NzczNTUzMzJiMmE3ZDkxZmRjIn0%3D; expires=Tue, 14-Jun-2016 18:24:46 GMT; Max-Age=7200; path=/
Transfer-Encoding:chunked
X-Powered-By:PHP/5.6.10
Request Headers
Accept:application/json, text/plain, */*
Accept-Encoding:gzip, deflate, sdch
Accept-Language:en-US,en;q=0.8
Cache-Control:no-cache
Connection:keep-alive
Cookie:cmosolution=eyJpdiI6IlBIYUhCQzJYUUxkMFwvZ0x2RmJraFhnPT0iLCJ2YWx1ZSI6IjVqaXA2ak9WN3laaHdVNkVtRVRPbVJOVThoRTVGV2E0K1RiTHdIcFQ4eGNCTU9nYjRnUnhVWEduODJvS1wvVW15QUlcL1ZyWlp0Z3o0MVhGem1rUzZPZjlLNEY5VTF3eW4zckE0ZXNHTHRCZXlOVVRDWkFoS3NlNHlBRVR3Y3IwdFkiLCJtYWMiOiI1NDcxNmRiZTBiN2NiMmQ5ODcwYTMyZDcxY2ZmYzFkMGU1NmRiNTc0MDAxNTBiMTVhZjNlZjJkMTljNjIyYmFlIn0%3D; remember_web_59ba36addc2b2f9401580f014c7f58ea4e30989d=eyJpdiI6Im94R1B6NFwvUnpMQUp2M0VuTDN3cFlnPT0iLCJ2YWx1ZSI6IjNJZUVCSnFhR1IzZ0tVaVNTd3hUb1NcL2FIcHFcL1pJaE9FMGRIc2hQV1NDZzBycmVxN2xUb0s3aUdjWnpRaDFCWlwveHhCXC8xZG1wOE5WdnJpc0RYK094UGVMWDJ1cjNiSFE0R2tMM2VhRkJuVT0iLCJtYWMiOiJhMjMzYTFlNmM2MWIxNDJlNDQwZDJiZTNhNjhhNWFhNzRlZThiYjNhNjc1NDhiY2MxNjI0MTc2MGY3MDgwYWQxIn0%3D; XSRF-TOKEN=eyJpdiI6IlZ5bHZNNlVXM0ZqM0tWd0dheUtJUEE9PSIsInZhbHVlIjoieGdyYzRtOUFHaHptMGVYbUw1ZzNIVU5hTjQ5cDM1QWljNHBWazVWZVVZdXdBRnl1YWFpc1UzVE53Q0tlbGZLQUxVazdLcWZ0enZxWG5vUURGZHVcL2RnPT0iLCJtYWMiOiI3NDIxNDJmN2Q3NWYxYWE2MTY0NDc4MGM4ZWNkMzFiNDY3OWE3ZWExMmNiMDEzZDNiYjU1Njg4NGQ5NzJhZmMxIn0%3D; laravel_session=a26e09eaf0a210a34e58ebab42acd9435681e0a5
Host:www.cmo:8888
Pragma:no-cache
Referer:http://www.cmo:8888/register
User-Agent:Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/51.0.2704.84 Safari/537.36
X-XSRF-TOKEN:eyJpdiI6IlZ5bHZNNlVXM0ZqM0tWd0dheUtJUEE9PSIsInZhbHVlIjoieGdyYzRtOUFHaHptMGVYbUw1ZzNIVU5hTjQ5cDM1QWljNHBWazVWZVVZdXdBRnl1YWFpc1UzVE53Q0tlbGZLQUxVazdLcWZ0enZxWG5vUURGZHVcL2RnPT0iLCJtYWMiOiI3NDIxNDJmN2Q3NWYxYWE2MTY0NDc4MGM4ZWNkMzFiNDY3OWE3ZWExMmNiMDEzZDNiYjU1Njg4NGQ5NzJhZmMxIn0=
The issue is resolved if I delete the DB session. This probably doesn't help much but I thought I would share anyway.
Any solution?
Is it working on the very latest version?
5.1.*
Please provide the exact version.
Laravel Framework version 5.1.39 (LTS)
@GrahamCampbell it fails in version 5.2.37 but there is no problem in version 5.2.38. I've run all tests and tried all scenarios and am pleased to report no problem. Thank you!
Great. ;)
But, if it's still broken on 5.1?
Version 5.1.39 does not work
I made the same change to 5.1 as to 5.2. If there is still a problem please someone make a PR. I am on vacation.
Version 5.1.39 does not work
Someone make a PR to fix please.
I think this was a recent change/update. I have been working on something for a client for months now, and only after a recent composer update did this issue start showing... Any chance this is a dependency problem?
Edit: For reference, this is happening at least twice a day on a local testbed running MySQL 5.7 and PHP 5.6.19. Switching the server to use 7.0.1 resolves the issue for me.
Edit 2: I could be wrong - was silly and did an update at the same time, so can't confirm what made it work without clearing anything.
They never seem to fix this. I'm on version 5.1.39 / 5.1.40 and it does not work.
?
@tuproyecto If you'd like to fix this in a PR, that would be appreciated.
I'm having the same issue. For me, the session expires after 2 hours, but when I go back to log in it uses the same session. However, it appears the session ID is not regenerated after the session has timed out. To fix this or any similar issues, you can do something like the following before you perform your login authentication checks:
$count = DB::table('sessions')->where('id', Session::getId())->count();

if ($count > 0) {
    $request->session()->regenerate();
}
I haven't tested this yet but this seems to be the problem. Let me know if this helps anyone or if it's not working.
I'm trying to understand the problem since it is occurring like crazy on one of my sites but not the others. On that particular domain, we have implemented an SSO login, so the process looks like this:
I can not reproduce the error myself, but my bug reporting system is throwing this error at me 20+ times a day. Since it doesn't happen regularly, it seems that it's potentially a browser issue, which got me thinking... Tell me if I'm being stupid here, but is this possible:
Is this plausible? I can dig into the framework code to see if this is possible, but I figured someone (Graham or Taylor) might know right away if this is even a feasible explanation/scenario.
The only other explanation I can fathom is that sometimes the loop back is too fast (which seems highly, highly unlikely) and starts a second request before the first one inserts, resulting in a duplicate session that has a false value for $this->exists.
https://github.com/laravel/framework/blob/5.1/src/Illuminate/Session/DatabaseSessionHandler.php#L95
If $this->exists is not going to be reliable (for whatever reason), then the obvious fix is to query the database ahead of insert/update. I would be happy to fork and create a pull request for this, but maybe there's a better option...?
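A minimal sketch of that query-ahead idea, assuming the 5.1-era DatabaseSessionHandler (its getQuery() helper is real, but this write() body is my hypothetical rewrite, not the framework's actual implementation):

```php
public function write($sessionId, $data)
{
    $payload = ['payload' => base64_encode($data), 'last_activity' => time()];

    // Don't trust $this->exists; ask the database directly whether the row is there.
    $exists = ! is_null($this->getQuery()->find($sessionId));

    if ($exists) {
        $this->getQuery()->where('id', $sessionId)->update($payload);
    } else {
        $this->getQuery()->insert(array_merge($payload, ['id' => $sessionId]));
    }

    $this->exists = true;
}
```

Note this only narrows the race rather than eliminating it: a concurrent request can still insert between the SELECT and the INSERT.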
I kept digging, and this -- https://github.com/laravel/framework/blob/5.1/src/Illuminate/Session/Store.php#L195 -- appears to suggest that my "second request" theory isn't realistic, but that would mean that the original request is somehow trying to insert the session twice.
Any ideas how that could happen?
Here's my last theory before I hang up and listen...
(My previously mentioned scenario in which the same session is saved twice is also unlikely, since the first save event sets ->exists to true, which would trigger updates instead of inserts on future ->write() calls.)
generateSessionId() should never generate a duplicate ID. This means that the "duplicate" entry is either (1) being read from the database before a ->setExists(false) is called or (2) being spawned from the original on the same request. It basically eliminates my previous scenarios as possibilities.
Scenario 1 seems unlikely since ->setExists(false) is only called during ->migrate(), which also calls ->generateSessionId(), which should prevent duplicate IDs. (https://github.com/laravel/framework/blob/5.1/src/Illuminate/Session/Store.php#L226)
That leaves the odd scenario where two session objects with the same ID are being saved for a single request. The catch is that the second object has to be a copy of the first because of ->generateSessionId(). This suggests that somewhere there's a "branch" in the code or a left-hand assignment that effectively creates a duplicate session.
This would be a valid explanation -- say, if somehow SessionStart->terminate() were to be called twice -- except that it appears to happen "randomly".
That's about all I can offer since I still can't track down why it happens sometimes and not others. Hopefully some part of my digging and rambling will help someone else figure this out. If anyone else has ideas, I'm willing to look into them.
@theTrip73 I said something like that a long time ago, but nobody paid attention )) I still think it may be the main reason.
UPDATE: My errors appear to be coming exclusively from the following user-agent (info I just added to my error reports): "Rackspace Monitoring/1.1 (https://monitoring.api.rackspacecloud.com)". This explains why no human has complained to me about this.
I can't imagine what the Rackspace crawler is doing to cause this though, and I suppose it's still technically a bug... ?
EDIT: The monitoring service checks a designated URL every 60 seconds to monitor uptime, but I only receive a few error notices each day, except for a cluster of ~10 between 3:00 AM and 4:00 AM each night (during server backup/maintenance routines). This seems to support the server latency idea, but I still can't figure out the exact mechanism that allows/causes a second insert attempt.
I'm on 5.2.39 and I also have this issue; I had to migrate to a cache server to get my app back to a normal state.
I am running 5.2.39 as well, using Redis for cache and the database for sessions, as I have 3 servers running production, all on Forge, all using Envoyer, and I still see this issue now and again. I believe it is because of write delay on the database. This only happens when I start loading up the database to around 80% CPU capacity on Amazon RDS.
I see this too and am just following this chain of comments. I'm pretty novice when it comes to where to store session data; would it make sense to move it to Redis instead of MySQL? (My setup is similar to @chrisvickers but on AWS, with everything in an RDS / MySQL shared DB.)
Just received this error in slack from one of my apps. If there is no PR in the pipeline, might have a look at this myself.
Hi all, I think we need to solve this problem. Or is there any suggestion on where to store sessions?
I have an idea, but I don't know how to test if it works. How about if we use openssl_random_pseudo_bytes(25) instead of Str::random(25)?

protected function generateSessionId()
{
    return sha1(uniqid('', true).openssl_random_pseudo_bytes(25).microtime(true));
}
I have an idea, but I don't know how to test if it works. How about if we use openssl_random_pseudo_bytes(25) instead of Str::random(25)
Really bad idea.
Taylor merged #12059 into 5.2 which would also help reduce the read/write race condition in the 5.1 database session handler.
PROPOSAL: A try-catch for the DatabaseSessionHandler::write() method's insert attempt to watch for PDOExceptions and check PDO::errorCode() for SQLSTATE 23505 (Unique violation: 7).
If 23505, read + update. If not 23505, re-throw.
It's an extremely targeted strike, but it should narrow the race condition to the smallest possible timeframe while also avoiding an overly general approach that could cause problems elsewhere.
EDIT: It would be very similar to this solution that I found after proposing this -- https://github.com/laravel/framework/pull/12059#issuecomment-190328770
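A sketch of that proposal against the same handler (an assumed shape; the linked PR's actual code may differ). One caveat: MySQL reports duplicate keys as SQLSTATE 23000 (error 1062), while 23505 is PostgreSQL's code, so matching the whole 23xxx integrity-violation class covers both:

```php
public function write($sessionId, $data)
{
    $payload = ['payload' => base64_encode($data), 'last_activity' => time()];

    try {
        $this->getQuery()->insert(array_merge($payload, ['id' => $sessionId]));
    } catch (\PDOException $e) {
        // SQLSTATE class "23" = integrity constraint violation: another
        // request won the insert race, so fall back to an update.
        if (strpos((string) $e->getCode(), '23') === 0) {
            $this->getQuery()->where('id', $sessionId)->update($payload);
        } else {
            throw $e; // anything else is a real error; re-throw it
        }
    }

    $this->exists = true;
}
```

This keeps the happy path a single query and only pays the extra update when the duplicate actually occurs.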
I'm using separate read and write hosts on the db connection used to handle sessions, so it's entirely possible that the read server hasn't been updated with the session data written to the write server. I've never encountered replication lag (using RDS) but to eliminate this as a potential cause, I've created a new connection which has a single read/write host.
If I get no further duplicate errors in the next week or so, I'll be happy to say this is related to some sort of race condition caused by read/write replication.
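For reference, that isolation can be done with a dedicated connection that has no read/write split, then pointing the session driver at it (the connection name and env variable here are illustrative, not from my actual config):

```php
// config/database.php -- single-host connection reserved for sessions
'sessions' => [
    'driver'    => 'mysql',
    'host'      => env('SESSION_DB_HOST', '127.0.0.1'),
    'database'  => env('DB_DATABASE'),
    'username'  => env('DB_USERNAME'),
    'password'  => env('DB_PASSWORD'),
    'charset'   => 'utf8',
    'collation' => 'utf8_unicode_ci',
    'prefix'    => '',
],

// config/session.php -- route the database session driver to that connection
'connection' => 'sessions',
```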
Unfortunately I'm still seeing this error, so I can rule out separate read/write connections as the cause of this bug.
Experienced this in 5.0 and 5.1
MySQL version: 5.6 Forge provisioned server.
We use Galera Cluster for MySQL, but each site reads and writes from only one database, so I don't think that would have anything to do with this error. I used artisan session:table to create the table, so everything is standard schema-wise. Happens quite intermittently, about once per day.