Closed: birutaibm closed this issue 6 years ago
Hi! This is roughly a 21 MB import, so it should work with the following settings (php_value syntax, e.g. in an .htaccess file; the equivalent php.ini directives are post_max_size and upload_max_filesize):
php_value post_max_size 100M
php_value upload_max_filesize 100M
Please put a phpinfo() page in the same directory to verify that these settings are being read by PHP. If they are not, please provide more details about your setup (which web server are you using? Apache? Is AllowOverride set for your directory?)
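For instance, a minimal way to check (the web-root path is just an illustration; use the directory where OpenDataBio is served):
echo '<?php phpinfo();' | sudo tee /var/www/html/opendatabio/info.php
# open http://localhost/opendatabio/info.php and look for post_max_size and upload_max_filesize,
# then remove the file so it is not left exposed
sudo rm /var/www/html/opendatabio/info.php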
Any progress on this issue?
Partially solved. The reported error (413) is gone, but now I am getting an HTTP 500 error. I will run some more tests. The code I used and the responses follow:
> library(opendatabio)
> library(rgeos)
rgeos version: 0.3-26, (SVN revision 560)
GEOS runtime version: 3.5.0-CAPI-1.9.0 r4084
Linking to sp version: 1.2-7
Polygon checking: TRUE
> cfg = odb_config("localhost/opendatabio/api","REGHRFWBJnyo")
> bra0 = readRDS("/home/rafael/Dropbox/TT4/BrasilLocations/BRA_adm0.rds")
> odb_import_locations(sp_to_df(bra0), cfg)
Sending ODB request (filesize = 22960168 )
Error in odb_send_post(data, odb_cfg, "locations") :
Internal Server Error (HTTP 500).
In addition: Warning message:
In sp_to_df(bra0) :
sp_to_df has been deprecated and will be removed in a future version!
> locs = odb_get_locations(list(limit=1), cfg)
> print(locs)
id name
1 2 Biruta Country
geom
1 POLYGON((10 10,50 10,50 50,10 50,10 10),(20 20,30 20,30 30,20 30,20 20))
levelName
1 Country
As you can see, a GET on the same table works, so the error is related to the import process and not to table access.
Error 500 generates a log entry, with a full stack trace, in storage/logs/laravel.log. Please check its contents.
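For example, from the application root (path assumed; adjust to your install):
tail -n 100 storage/logs/laravel.log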
I had removed the whole opendatabio installation to be able to test other issues, sorry.
Well, I reinstalled, redid every step, and found that the error was related to max_allowed_packet. The file /home/odbserver/.my.cnf was in place, but MySQL (I don't know why) was ignoring it. So I added the configuration to another file (on my Ubuntu PC it is /etc/mysql/mysql.conf.d/mysqld.cnf; a sketch of the block I added is below, after the log), restarted the MySQL service, and tried again, only to get the HTTP 500 error one more time. Now I don't understand the log; its content is:
[2018-02-08 12:34:04] local.ERROR: Allowed memory size of 268435456 bytes exhausted (tried to allocate 91865088 bytes) {"userId":1,"email":"admin@example.org","exception":"[object] (Symfony\\Component\\Debug\\Exception\\FatalErrorException(code: 1): Allowed memory size of 268435456 bytes exhausted (tried to allocate 91865088 bytes) at /home/odbserver/opendatabio/vendor/barryvdh/laravel-debugbar/src/Storage/FilesystemStorage.php:43)
[stacktrace]
#0 {main}
"}
As far as I know, the ~/.my.cnf file controls how the MySQL client works, not the server. The configuration required by OpenDataBio needs to be set in the global configuration files. The error you mention now is probably related to PHP's memory limit: what is the current value of memory_limit in the php.ini used by the Apache server?
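To check the value the Apache PHP module is actually using, something like this usually works (the path is a guess for a Debian/Ubuntu setup; the phpinfo() page from before also shows the effective memory_limit):
grep memory_limit /etc/php/*/apache2/php.ini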
memory_limit = 256M
which is exactly the 268,435,456 bytes that the error reports as allowed. But the error says it tried to allocate 91,865,088 bytes, which is less than the limit, so the error message makes no sense to me.
Am I interpreting it incorrectly?
The value reported as not being allowed is a single object of around 90 MB that PHP was trying to allocate, which, added to the objects already in memory, exceeds the maximum allowed 256M. Can you increase the directive to 512M and see if it works? I may need to optimize this job's memory usage anyway, but it would be good to know whether this solves the problem.
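Illustratively: if the request handler is already holding, say, around 180 MB when that ~92 MB allocation is attempted, 180 MB + 92 MB exceeds the 256 MB (268,435,456 byte) ceiling, so PHP aborts even though the single allocation is below the limit.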
Sorry for my misunderstanding.
You are right: with memory_limit = 512M it creates the job without errors.
Unfortunately, the job does not start, as you can see in R:
> odb_get_jobs(list(limit=1), cfg)
id created_at updated_at dispatcher
1 1 2018-02-08 12:33:52 2018-02-08 12:34:02 App\\Jobs\\ImportLocations
2 2 2018-02-08 17:16:48 2018-02-08 17:16:50 App\\Jobs\\ImportLocations
status percentage
1 Submitted - %
2 Submitted - %
or in mysql:
mysql -u opendatabio -p opendatabio -e "SELECT id, queue, attempts FROM jobs"
Enter password:
+----+---------+----------+
| id | queue | attempts |
+----+---------+----------+
| 1 | default | 0 |
| 2 | default | 0 |
+----+---------+----------+
Some additional information:
rafael@rafael-note ~ $ service supervisor status
● supervisor.service - Supervisor process control system for UNIX
Loaded: loaded (/lib/systemd/system/supervisor.service; enabled; vendor prese
Active: active (running) since Qui 2018-02-08 07:10:05 -02; 8h ago
Docs: http://supervisord.org
Main PID: 1245 (supervisord)
CGroup: /system.slice/supervisor.service
└─1245 /usr/bin/python /usr/bin/supervisord -n -c /etc/supervisor/sup
Fev 08 07:10:22 rafael-note supervisord[1245]: 2018-02-08 07:10:22,318 INFO exit
Fev 08 07:10:22 rafael-note supervisord[1245]: 2018-02-08 07:10:22,329 INFO gave
Fev 08 07:10:22 rafael-note supervisord[1245]: 2018-02-08 07:10:22,335 INFO exit
Fev 08 07:10:22 rafael-note supervisord[1245]: 2018-02-08 07:10:22,337 INFO gave
Fev 08 07:10:22 rafael-note supervisord[1245]: 2018-02-08 07:10:22,338 INFO exit
Fev 08 07:10:22 rafael-note supervisord[1245]: 2018-02-08 07:10:22,338 INFO exit
Fev 08 07:10:22 rafael-note supervisord[1245]: 2018-02-08 07:10:22,338 INFO gave
Fev 08 07:10:22 rafael-note supervisord[1245]: 2018-02-08 07:10:22,339 INFO gave
Fev 08 07:10:22 rafael-note supervisord[1245]: 2018-02-08 07:10:22,340 INFO exit
Fev 08 07:10:23 rafael-note supervisord[1245]: 2018-02-08 07:10:23,341 INFO gave
rafael@rafael-note ~ $ more /etc/supervisor/conf.d/opendatabio-worker.conf
[program:opendatabio-worker]
process_name=%(program_name)s_%(process_num)02d
command=php /home/odbserver/opendatabio/app/artisan queue:work --sleep=3 --tries=1 --timeout=0 --daemon
autostart=true
autorestart=true
user=odbserver
numprocs=8
redirect_stderr=true
stdout_logfile=/home/odbserver/opendatabio/storage/logs/supervisor.log
rafael@rafael-note ~ $ more /home/odbserver/opendatabio/storage/logs/supervisor.log
more: stat of /home/odbserver/opendatabio/storage/logs/supervisor.log failed: No such file or directory
Your logs and status suggest to me that this is a problem with supervisor not being able to read or understand the opendatabio-worker.conf file. On the test server, the command systemctl status shows eight additional worker instances started under the supervisor process, as expected from the numprocs=8 configuration parameter. You may want to check what the supervisor log says after the "INFO exit..." lines; a couple of diagnostic commands are sketched after the status output below.
root@LabtropServer:~# systemctl status supervisor.service
● supervisor.service - Supervisor process control system for UNIX
   Loaded: loaded (/lib/systemd/system/supervisor.service; enabled; vendor preset: enabled)
   Active: active (running) since Mon 2018-01-29 18:20:47 -02; 1 weeks 3 days ago
     Docs: http://supervisord.org
  Process: 12068 ExecStop=/usr/bin/supervisorctl $OPTIONS shutdown (code=exited, status=0/SUCCESS)
 Main PID: 12073 (supervisord)
    Tasks: 9 (limit: 4915)
   CGroup: /system.slice/supervisor.service
           ├─12073 /usr/bin/python /usr/bin/supervisord -n -c /etc/supervisor/supervisord.conf
           ├─12077 php /home/opendatabio/opendatabio/artisan queue:work --sleep=3 --tries=1 --timeout=0 --daemon
           ├─12078 php /home/opendatabio/opendatabio/artisan queue:work --sleep=3 --tries=1 --timeout=0 --daemon
           ├─12079 php /home/opendatabio/opendatabio/artisan queue:work --sleep=3 --tries=1 --timeout=0 --daemon
           ├─12080 php /home/opendatabio/opendatabio/artisan queue:work --sleep=3 --tries=1 --timeout=0 --daemon
           ├─12081 php /home/opendatabio/opendatabio/artisan queue:work --sleep=3 --tries=1 --timeout=0 --daemon
           ├─12082 php /home/opendatabio/opendatabio/artisan queue:work --sleep=3 --tries=1 --timeout=0 --daemon
           ├─12083 php /home/opendatabio/opendatabio/artisan queue:work --sleep=3 --tries=1 --timeout=0 --daemon
           └─12084 php /home/opendatabio/opendatabio/artisan queue:work --sleep=3 --tries=1 --timeout=0 --daemon
jan 29 18:20:49 LabtropServer supervisord[12073]: 2018-01-29 18:20:49,135 INFO spawned: 'opendatabio-worker_07' with pid 12083
jan 29 18:20:49 LabtropServer supervisord[12073]: 2018-01-29 18:20:49,137 INFO spawned: 'opendatabio-worker_06' with pid 12084
jan 29 18:20:50 LabtropServer supervisord[12073]: 2018-01-29 18:20:50,139 INFO success: opendatabio-worker_01 entered RUNNING state, process has stayed up for
jan 29 18:20:50 LabtropServer supervisord[12073]: 2018-01-29 18:20:50,140 INFO success: opendatabio-worker_00 entered RUNNING state, process has stayed up for
jan 29 18:20:50 LabtropServer supervisord[12073]: 2018-01-29 18:20:50,140 INFO success: opendatabio-worker_03 entered RUNNING state, process has stayed up for
jan 29 18:20:50 LabtropServer supervisord[12073]: 2018-01-29 18:20:50,140 INFO success: opendatabio-worker_02 entered RUNNING state, process has stayed up for
jan 29 18:20:50 LabtropServer supervisord[12073]: 2018-01-29 18:20:50,140 INFO success: opendatabio-worker_05 entered RUNNING state, process has stayed up for
jan 29 18:20:50 LabtropServer supervisord[12073]: 2018-01-29 18:20:50,140 INFO success: opendatabio-worker_04 entered RUNNING state, process has stayed up for
jan 29 18:20:50 LabtropServer supervisord[12073]: 2018-01-29 18:20:50,140 INFO success: opendatabio-worker_07 entered RUNNING state, process has stayed up for
jan 29 18:20:50 LabtropServer supervisord[12073]: 2018-01-29 18:20:50,140 INFO success: opendatabio-worker_06 entered RUNNING state, process has stayed up for
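As an aside, a couple of commands that usually help narrow this kind of problem down (supervisorctl ships with the supervisor package; the log path is the Ubuntu default):
sudo supervisorctl status
# after editing anything under /etc/supervisor/conf.d/, reload the worker definitions
sudo supervisorctl reread
sudo supervisorctl update
# the daemon's own log, where the full "INFO exit"/"gave up" messages appear
tail -n 50 /var/log/supervisor/supervisord.log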
Sorry for the delay.
At the time of my last comment the supervisor.log file did not exist. Well, I don't know what happened, but now it exists and it contains many repetitions of the same line: Could not open input file: /home/odbserver/opendatabio/app/artisan.
In fact, the artisan file is outside the app folder, so I changed the command=... line of opendatabio-worker.conf to the correct path (sketched below), restarted supervisor, and it works.
Please fix the installation instructions that describe the contents of the opendatabio-worker.conf file.
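For clarity, the corrected line in my opendatabio-worker.conf is essentially this (same options as before, only the path changed to where artisan actually lives):
command=php /home/odbserver/opendatabio/artisan queue:work --sleep=3 --tries=1 --timeout=0 --daemon
After that I restarted supervisor (sudo service supervisor restart) so the change took effect.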
I have already changed the sample supervisor file. Thanks for the input; I hope things run more smoothly now.