rodmcnew opened this issue 8 years ago
@rodmcnew You're right in guessing that it's because you're trying to access the same file. This was done to make sure you don't accidentally corrupt a file. It shouldn't happen a lot, unless you deploy to active, high-traffic nodes; in that case other requests may try to write to files you just locked for your release.
Our release process doesn't do anything strange like locking files. I believe the only effect our releases have is that they delete the asset manager file cache. We get errors in our logs from this during at least 50% of releases. It's hard to imagine the cause being anything other than two threads writing the same file at the same time.
One solution could be to change this:
$tmpFilePath = $cacheDir . '/AssetManagerFilePathCache_' . $fileName;
To something like:
$tmpFilePath = $cacheDir . '/AssetManagerFilePathCache_' . rand(0, 1000000000) . '_' . $fileName;
I'm going to try this on our local copy for a while and see if it makes the errors go away.
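For illustration, here's a minimal sketch of the write-to-a-unique-temp-file-then-rename pattern that change enables. It is not the module's actual code, and the helper name is made up for this sketch; it only assumes (as the $tmpFilePath naming suggests) that the cache writes to a temporary path and then moves it over the final cache file:
// Illustrative only, not the module's code: write the cache contents to a
// uniquely named temp file, then rename() it over the final path. rename()
// is atomic on the same filesystem, so a concurrent request never reads a
// half-written cache file and two writers never share a temp file.
function writeCacheFileSafely($cacheDir, $fileName, $contents)
{
    $tmpFilePath = $cacheDir . '/AssetManagerFilePathCache_'
        . uniqid('', true) . '_' . $fileName;

    if (false === @file_put_contents($tmpFilePath, $contents, LOCK_EX)) {
        throw new \RuntimeException('Unable to write file ' . $tmpFilePath);
    }

    // Atomic move into the final location (assumes the target dir exists).
    rename($tmpFilePath, $cacheDir . '/' . $fileName);
}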
That's what I said: if you clear the cache on an active node (meaning, with traffic) and don't have one process warming it up somewhere offline, you might get errors. I still think that's appropriate behavior.
This is just a guess obviously, I don't know how / where you guys do releases.
In my opinion, there is no point in having a cache if it has to be pre-warmed offline. That sounds more like a build process than a cache.
I would guess that at least 80% of developers are not going to have a complex release process that does offline cache pre-warming, especially in the PHP world.
@rodmcnew I doubt it, too. But if you're going to resolve assets using PHP on high-traffic sites, you really should. If two or more people try to access the exact same resource at the exact same time, and it's not there yet, it's going to break stuff. I'd suggest either hosting assets statically or implementing a warm-up strategy.
The caching makes sense. The only alternative I see is listening for exceptions and retrying (maybe redirecting to self) when you catch one, and repeating until it works. This is icky, though. Another solution (if you don't want to go static) is to use Varnish. Varnish is amazing at this, but it renders the caching mechanism useless (as it gives you a whole new level of control).
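For what it's worth, a rough sketch of that retry idea, assuming the caller can wrap asset resolution itself (the wrapper and the callable passed to it are hypothetical, not part of AssetManager):
// Hypothetical wrapper: retry an asset lookup that may throw while another
// request is still writing the same cache file.
function resolveWithRetry(callable $resolve, $attempts = 3, $delayMs = 100)
{
    for ($i = 1; $i <= $attempts; $i++) {
        try {
            return $resolve();
        } catch (\RuntimeException $e) {
            if ($i === $attempts) {
                throw $e; // give up after the last attempt
            }
            usleep($delayMs * 1000); // back off briefly, then retry
        }
    }
}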
Thanks. I believe adding randomness to the $tmpFilePath as in the above comment will fix the issue in a simple manner. I did this 10 days ago and haven't seen any exceptions since. If I don't get any exceptions in a month and I still remember, I will PR it.
@rodmcnew If this solves it for you, I see no problem in merging that. It won't hurt others. :)
@rodmcnew Funny to see this error. I put that lock in to fix your release problems. Glad you found another fix for the new issue.
@wshafer I have found that adding randomness to $tmpFilePath as I suggested has not eliminated this issue. We now get errors from "!is_writable($cacheDir)" during releases. I'm not sure what causes it, though. Our release process looks like the following; PHP is stopped while files are being moved around. This happens to at least one file in about 20% of our releases.
service php5-fpm stop
mv /www/app/current /www/app/current.bak
mv /www/app/deploying-temp /www/app/current
service php5-fpm start
Example message from the "!is_writable($cacheDir)" line:
Unable to write file /www/app/current/config/autoload/../../public/modules/checkout/directive/preview-totals/preview-totals.html
One thing I've always disliked is throwing an exception on cache misses. Would anyone object to caches failing silently? Or perhaps only failing if a debug flag is set?
My issue here is that there is absolutely nothing wrong with the asset itself. I feel like we should show the user the valid asset unless told otherwise.
Incidentally, when I first put the PR in to do this, that's what I was doing. And as you'd expect, we didn't see the error @rodmcnew is seeing now.
Thoughts?
@RWOverdijk @rodmcnew @Ocramius
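For concreteness, a minimal sketch of what "failing silently" could look like around the cache write (illustrative only; $debug and writeCacheFile() are made-up names, not the module's API):
// Swallow cache write failures unless a debug flag is set; the asset itself
// is fine, only the cache write failed.
try {
    writeCacheFile($cacheDir, $fileName, $contents);
} catch (\RuntimeException $e) {
    if ($debug) {
        throw $e; // surface cache problems only when debugging
    }
    // otherwise ignore and serve the asset as usual
}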
@wshafer I think the idea behind it is that you get to decide what happens yourself. If you don't want to throw exceptions, you catch them and ignore them. Others might want to catch them and do something else. Throwing them provides the most flexibility.
I get that, except you can't catch these. You'd have to extend the Service Manager yourself, override the set cache, and put your version in place of the main service.
I guess you could replace the error handler to catch it, but you don't have the assets in the exception to pass through.
To keep the mindset, and so errors don't have to go unnoticed, what if we made the exceptions a config option on the cache? Maybe a flag on the caches like 'ignoreErrors', similar to Monolog?
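Something along these lines, perhaps (the 'ignore_errors' key is hypothetical and the surrounding structure is only approximate, loosely following AssetManager's caching config):
// Hypothetical config sketch: a per-cache flag to suppress cache exceptions,
// in the spirit of Monolog's ignore_errors. This key does not exist today.
return array(
    'asset_manager' => array(
        'caching' => array(
            'default' => array(
                'cache'   => 'FilePathCache',
                'options' => array(
                    'dir'           => 'public',
                    'ignore_errors' => true, // don't throw on cache write failures
                ),
            ),
        ),
    ),
);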
Update for the record (based on our Gitter conversation): we'll be skipping this for now.
@wshafer
PS: No fix for the cache issue we talked about earlier; there's no way to distinguish a cache exception from an asset exception. Gonna drop it for now.
Almost every time we release, we get errors from the following line for random CSS and JS files: https://github.com/RWOverdijk/AssetManager/blob/master/src/AssetManager/Cache/FilePathCache.php#L94
I believe this happens because more than one person is asking for the same file at the same time. Perhaps two PHP threads are trying to write the same file at the same time.
I had to change it to this in our local version to prevent errors in our logs every release:
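(Presumably the same change quoted earlier in the thread:)
$tmpFilePath = $cacheDir . '/AssetManagerFilePathCache_' . rand(0, 1000000000) . '_' . $fileName;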