haiwen / seahub

The web end of seafile server.

500 Internal Server Error when accessing files/history via Seahub #305

Closed devillemereuil closed 7 years ago

devillemereuil commented 9 years ago

Since a few versions ago (around 3.1.X), when accessing the history via Seahub on my private Seafile server, I encounter a "500 Internal Server Error". No changes have been made to the Apache configuration since the "old" days of 3.0.

Sometimes, when trying hard, I can temporarily access the history or view the files, but eventually I go back to the 500 Internal Server Error issue.

The library is encrypted, if that matters.

lins05 commented 9 years ago

The error should be recorded in seahub_django_requests.log; can you check it?

devillemereuil commented 9 years ago

Whoo, this has to be the record of the quickest response ever! ;)

Here are the last lines in seahub_django_request.log (today's content):

2014-11-18 20:37:59,517 [WARNING] django.request:146 get_response Not Found: /f/e367C3C59a
2014-11-18 21:55:31,075 [WARNING] django.request:146 get_response Not Found: /favicon.ico
2014-11-18 21:55:31,623 [WARNING] django.request:146 get_response Not Found: /favicon.ico
2014-11-18 22:13:04,168 [ERROR] django.request:212 handle_uncaught_exception Internal Server Error: /ajax/unseen-notices-count/
Traceback (most recent call last):
  File "/home/seafile/seafile-server-3.1.7/seahub/thirdpart/Django-1.5.1-py2.6.egg/django/core/handlers/base.py", line 92, in get_response
    response = middleware_method(request)
  File "/home/seafile/seafile-server-3.1.7/seahub/seahub/base/middleware.py", line 22, in process_request
    username = request.user.username
  File "/home/seafile/seafile-server-3.1.7/seahub/seahub/auth/middleware.py", line 9, in get
    request._cached_user = get_user(request)
  File "/home/seafile/seafile-server-3.1.7/seahub/seahub/auth/__init__.py", line 115, in get_user
    user = backend.get_user(username) or AnonymousUser()
  File "/home/seafile/seafile-server-3.1.7/seahub/seahub/base/accounts.py", line 247, in get_user
    user = User.objects.get(email=username)
  File "/home/seafile/seafile-server-3.1.7/seahub/seahub/base/accounts.py", line 79, in get
    emailuser = ccnet_threaded_rpc.get_emailuser(email)
  File "/home/seafile/seafile-server-3.1.7/seafile/lib64/python2.6/site-packages/pysearpc/client.py", line 110, in newfunc
    ret_str = self.call_remote_func_sync(fcall_str)
  File "/home/seafile/seafile-server-3.1.7/seafile/lib64/python2.6/site-packages/ccnet/rpc.py", line 71, in call_remote_func_sync
    client = self.pool.get_client()
  File "/home/seafile/seafile-server-3.1.7/seafile/lib64/python2.6/site-packages/ccnet/pool.py", line 27, in get_client
    client = self._create_client()
  File "/home/seafile/seafile-server-3.1.7/seafile/lib64/python2.6/site-packages/ccnet/pool.py", line 19, in _create_client
    client.connect_daemon()
  File "/home/seafile/seafile-server-3.1.7/seafile/lib64/python2.6/site-packages/ccnet/client.py", line 120, in connect_daemon
    return self.connect_daemon_with_pipe()
  File "/home/seafile/seafile-server-3.1.7/seafile/lib64/python2.6/site-packages/ccnet/client.py", line 102, in connect_daemon_with_pipe
    raise NetworkError("Can't connect to daemon")
NetworkError: Can't connect to daemon

The last lines are probably due to the fact that I just restarted Seafile/Seahub for a test.

lins05 commented 9 years ago

So there's no error logged for the 500 internal error you saw when visiting the file history?

You can add "DEBUG = True" in seahub_settings.py and restart seahub, then try to reproduce the error, in this case the error messages when be displayed in your browser.

devillemereuil commented 9 years ago

No, there is no error linked to the time of the problem. Is the "Can't connect to daemon" error normal, though?

I tried DEBUG = True in the settings file and restarted Seahub, but I don't get any error messages in my browser when the 500 Internal Server Error occurs.

While monitoring my server with top during the "bug", it seems that seaf-server reaches very high CPU usage and then stops a few seconds after the browser displays the 500 Internal Server Error message.

Here is some info from seafile.log that I thought might be interesting (though with great doubts):

[11/18/14 17:21:49] ccnet_processor_handle_update: [Proc] Shutdown processor rpcserver-proc(-1821) for bad update: 515 peer down
[11/18/14 17:21:49] ccnet_processor_handle_update: [Proc] Shutdown processor threaded-rpcserver-proc(-1820) for bad update: 515 peer down
[11/18/14 17:21:49] ccnet_processor_handle_update: [Proc] Shutdown processor rpcserver-proc(-1827) for bad update: 515 peer down
[11/18/14 17:21:49] ccnet_processor_handle_update: [Proc] Shutdown processor threaded-rpcserver-proc(-1819) for bad update: 515 peer down
[11/18/14 17:21:49] ccnet_processor_handle_update: [Proc] Shutdown processor threaded-rpcserver-proc(-1818) for bad update: 515 peer down

lins05 commented 9 years ago

With DEBUG = True the error details should be displayed in the browser, but they weren't.

Maybe the 500 error was returned by Apache rather than Seafile itself? If the library has a very long history and many changes, the file history calculation can be time consuming. You can check the Apache logs for that.

devillemereuil commented 9 years ago

Well done!

Indeed, the error lies in the Apache log. I triggered the bug and the following two lines appeared in error.log (some info blurred with xxxx for anonymity):

[Wed Nov 19 11:19:06.558376 2014] [fastcgi:error] [pid 12966] [client xxx.xx.xx.xxx:58124] FastCGI: comm with server "/var/www/seahub.fcgi" aborted: idle timeout (30 sec), referer: https://xxxx/repo/20b6174a-97fd-4398-aab5-ee730fd69527/?p=xxxxxx
[Wed Nov 19 11:19:06.558596 2014] [fastcgi:error] [pid 12966] [client xxx.xx.xx.xxx:58124] FastCGI: incomplete headers (0 bytes) received from server "/var/www/seahub.fcgi", referer: https://xxxxx/repo/20b6174a-97fd-4398-aab5-ee730fd69527/?p=xxxxxx

So it seems that the 500 error is indeed triggered by an overly long calculation time. The library is big (62 GB) and pretty old.

I guess the way to go would be to increase the "idle timeout" on the Apache side, right? (I'm not fond of the idea of breaking the library into smaller libraries, and I need the history...) I don't know how to do that yet, but the Web shall be my friend on this.

shoeper commented 9 years ago

Maybe this helps: http://httpd.apache.org/mod_fcgid/mod/mod_fcgid.html#fcgididletimeout

Another question would be: after how much time do you get the 500 error? If you wait more than half a minute before the error appears, it could be a timeout in Apache (so Seafile calculates for too long). In my opinion, the reason the calculation takes so long should be tracked down, and the time reduced.

Edit: I've just seen that you've reported it is a timeout, so the Apache page I referred to should be helpful to temporarily solve the issue (by increasing the timeout).

devillemereuil commented 9 years ago

Thanks for the link, but this timeout seems to be 300 s by default, right? So it's probably not the one at fault.

A question, if you don't mind: can I set this parameter within my Seafile virtual host? Right now it is configured just as explained in the Seafile documentation for HTTPS with Apache.

The 500 error arises after exactly 30 seconds (I just measured it), so it is in line with the "idle timeout" error from Apache.

Is there anything I can do to help you track down the origin of the calculation time? Maybe I'm just asking too much of Seafile (large library and long history).

shoeper commented 9 years ago

This should help fix it: http://stackoverflow.com/a/17468227

(I'm not a Seafile team member, so I can't tell you how to track down exactly where the issue is for the long processing time, but because of another issue I believe it is in the calculation of the contributors; see https://github.com/haiwen/seahub/issues/302)

devillemereuil commented 9 years ago

Thanks @shoeper! I added -idle-timeout 900 to the FastCGI directives in apache2.conf:

seafile

FastCGIExternalServer /var/www/seahub.fcgi -idle-timeout 900 -host 127.0.0.1:8000
FastCGIExternalServer /var/www/seafdav.fcgi -idle-timeout 900 -host 127.0.0.1:8080

Now everything works. (It seems the History page needed just a bit more than 30 seconds to load, so all I need now is patience, which is fine by me...)

chatainsim commented 9 years ago

I have the same issue. Adding -idle-timeout 900 for FastCGIExternalServer in the Apache configuration partially solved the problem, but it still takes a lot of time to display a file, even if it is only 10 KB.

It's more a workaround than a fix. Any lead on how to get a quick response?

Thank you.

devillemereuil commented 9 years ago

I think the computation time for the history page is not dependent on the size of the file, but on the length of the history and size of the library.

chatainsim commented 9 years ago

Yes, but I also have this issue when opening a file, like a picture or a text file. If you want to check a file on the go but it takes a very long time to show up, it's kind of useless.

shoeper commented 9 years ago

@chatainsim On which hardware, and how long is your history (you can set a value higher than a hundred history entries via the URL, after clicking "hundred entries per page")?

Btw. It's related to #302.

devillemereuil commented 9 years ago

Regarding the viewing of files, something I noticed is that once a file has been displayed in Seahub (after a long wait for the computation to finish), displaying it again is instantaneous (or at least normally quick). So once "something" has been computed, it is not computed again, at least for a short period of time.

shoeper commented 9 years ago

It could also simply be cached by the browser.

chatainsim commented 9 years ago

@shoeper Apparently I have no issue with the history, just with viewing files. And yes @devillemereuil, once the file has been displayed once, loading it again is instantaneous. Regarding issue #302, I have the same issue when opening a text file, for example, and even for a file that can't be displayed in Seafile. Hardware is an HP MicroServer N36L (AMD Athlon(tm) II Neo N36L Dual-Core Processor) with 4 GB of memory.

chatainsim commented 9 years ago

I just noticed that seaf-server uses almost 100% CPU when opening a file:

27527 simon 20 0 190m 15m 2632 S 98 0.3 1:51.32 seaf-server

And:

27527 simon 20 0 191m 15m 2632 S 99 0.3 2:26.24 seaf-server

devillemereuil commented 9 years ago

Yes, that is the computation that needs to finish before the page can be displayed.

shoeper commented 9 years ago

To quote freeplant from #302: "This most likely to be caused by calculating "contributor" of a file (showing at the top of the file view). We will remove it and test the performance."

Maybe a better question would be how many files are in the folder. (In my case it's only slow with many files in the same folder: with 1250 images it takes 15 seconds to view one; with 3 files in the folder I can view each image instantly.)

chatainsim commented 9 years ago

OK, so this is why it takes a long time to open a file... Too bad, I'll have to upgrade my server :-1: to get a quicker response.

devillemereuil commented 9 years ago

@shoeper I have only one contributor in my library. When you say "folder", do you mean "library"? Because I have the same computation time for every folder in my library.

shoeper commented 9 years ago

I also have only one contributor in that library. And I've just looked in the folder, but the library is also way bigger than the other one, where everything runs smoothly.

Maybe the newest metadata of each file should get stored in a temporary database. But let's wait and hear what @freeplant or someone else from the team says.

Pro commented 9 years ago

Same problem here on Seafile Server 4.0.1. I have quite a big library (260 GB, 130,000 files, 20,000 folders) and the history may also be quite long. I'm the only user of this Seafile server.

Increasing the idle timeout fixed the problem temporarily, but I think this problem is related to the huge number of files and the long history, and thus git needs some time to reconstruct the file.

freeplant commented 9 years ago

The problem is solved, and the fix will be included in the next release.

chatainsim commented 9 years ago

Thanks for the fix! What was causing this issue?

freeplant commented 9 years ago

Here is the list of slow operations:

  1. Viewing a file is slow (solved, as in #302, by removing the calculation of contributors).
  2. Viewing the trash is slow (solved in server version 4.0.6 by using a new algorithm).
  3. Viewing a file's history page is slow if the library is modified frequently (not solved yet).

shoeper commented 9 years ago

Maybe some caching using memcached or MySQL could help.
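To make that idea concrete, here is a rough sketch (not Seahub's actual code) of how such caching could look with Django's cache framework backed by memcached; get_file_contributors_cached and the compute callback are made-up names standing in for whatever expensive history walk Seahub performs:

import hashlib

# Assumes a memcached cache backend is configured in the Django settings.
from django.core.cache import cache

def get_file_contributors_cached(repo_id, path, compute, timeout=600):
    """Return the contributors of a file, caching the result for `timeout` seconds.

    `compute` stands in for the expensive call that walks the library history;
    it is a placeholder, not a real Seahub function.
    """
    # Hash the path so the cache key stays short and free of characters
    # that memcached does not accept.
    key = 'file-contributors-%s-%s' % (repo_id, hashlib.md5(path).hexdigest())
    contributors = cache.get(key)
    if contributors is None:
        contributors = compute(repo_id, path)
        cache.set(key, contributors, timeout)
    return contributors

Repeated views of the same file within the timeout would then skip the recalculation, at the cost of possibly showing slightly stale contributor information.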