cg923 closed this issue 6 years ago
https://stackoverflow.com/questions/36129259/php7-with-apcu-call-to-undefined-function-apc-fetch
In order to resolve this error: Call to undefined function apc_fetch()
we must either add the APCu Backwards Compatibility module on top of APCu, or find a different method of reading from the cache.
We should be able to just rename these apc_*() calls to use apcu_fetch(), etc.:
http://php.net/manual/en/intro.apcu.php
http://php.net/manual/en/ref.apcu.php
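A minimal sketch of the rename approach, for environments where the apcu_bc module isn't installed: define the legacy apc_* names as thin wrappers around their apcu_* equivalents (the apcu_fetch()/apcu_store()/apcu_delete() signatures are from the APCu manual linked above).

```php
<?php
// Shim sketch: define the legacy apc_* names in terms of the apcu_*
// functions when neither APC nor the apcu_bc extension is present.
if (!function_exists('apc_fetch') && function_exists('apcu_fetch')) {
    function apc_fetch($key, &$success = null) {
        return apcu_fetch($key, $success);
    }
    function apc_store($key, $value, $ttl = 0) {
        return apcu_store($key, $value, $ttl);
    }
    function apc_delete($key) {
        return apcu_delete($key);
    }
}
```

Alternatively, a project-wide search-and-replace of `apc_` with `apcu_` accomplishes the same thing without the shim, at the cost of a larger diff.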
For some reason the saw version got somewhat confused when trying to write the progress of the catalog export. I had to manually adjust the path to the progress file when switching from exporting via the UI to exporting via the CLI.
In production, the path is different and theoretically should work out of the box, but we should double check when we deploy that both the UI and CLI exports are working.
@cg923 How about putting the progress info into a new database table rather than writing it to the filesystem? That would avoid issues with differing file permissions between the command-line environment and the web UI.
One other potential tweak to avoid running two jobs at once is to store the process ID of a running job in the table/status file. When trying to run again, check whether that process is still running; if not, clean out the old file and start the new job. I'm looking for an example of this in PHP (I know I've done it), but here is one place it's done in Bash: https://github.com/middlebury/chef/blob/5f159ae533e27ea6e509c58bb374f0213c35b823/cookbooks/web_drupal/templates/default/usr/local/bin/run_serially.erb
Here's a PHP example of that using getmypid(): https://github.com/adamfranco/CurvatureBuilder/blob/master/bin/process_newer#L37-L53
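A condensed sketch of that PID-check pattern (the function name and file path here are hypothetical, not from the linked code): record getmypid() at startup, and before starting a new job use posix_kill() with signal 0 to test whether the recorded process still exists.

```php
<?php
// Sketch of the PID-file liveness check described above.
// Names and paths are hypothetical; requires the posix extension.

function jobAlreadyRunning(string $pidFile): bool {
    if (!file_exists($pidFile)) {
        return false;
    }
    $pid = (int) trim(file_get_contents($pidFile));
    // posix_kill() with signal 0 checks for process existence
    // without actually sending a signal.
    if ($pid > 0 && function_exists('posix_kill') && posix_kill($pid, 0)) {
        return true;  // previous job is still running
    }
    unlink($pidFile);  // stale PID file; clean it up
    return false;
}

$pidFile = sys_get_temp_dir() . '/catalog-export.pid';  // hypothetical path
if (jobAlreadyRunning($pidFile)) {
    exit("An export is already in progress.\n");
}
file_put_contents($pidFile, getmypid());
// ... run the export, then remove the PID file on shutdown.
register_shutdown_function(function () use ($pidFile) {
    if (file_exists($pidFile)) {
        unlink($pidFile);
    }
});
```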
That makes a lot of sense. I'll move the status of the job to the database.
The table could combine both of your suggestions. Insert a row with a jobID and a progress of 0. Continually update the progress and read it from the DB to update the user on the frontend. When the job is finished, delete the row. If another job is started, check the table to see if any rows are present. If so... add it to the table anyway? Abort? What's our desired behavior?
Easiest thing is probably to just not allow it, as aborting the running job could also leave a partially rendered output.
Sorry, I wasn't totally clear. If a job is running and a second job is started, we should simply not run the second job? Or queue it for later perhaps?
Just don't let it run I think. The only time this should happen is if the user is clicking the button in the UI too many times.
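Putting the pieces of this thread together, the table-based approach might look like the following sketch. The table and function names are hypothetical, and SQLite via PDO is used only to keep the example self-contained; production code would point PDO at the real database.

```php
<?php
// Sketch of the DB-backed job status discussed above (names hypothetical).
$db = new PDO('sqlite::memory:');
$db->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
$db->exec('CREATE TABLE export_jobs (
    job_id   INTEGER PRIMARY KEY,
    pid      INTEGER NOT NULL,
    progress INTEGER NOT NULL DEFAULT 0
)');

// Refuse to start if a job row already exists -- the "just don't let it
// run" behavior agreed on above.
function startJob(PDO $db): ?int {
    $running = (int) $db->query('SELECT COUNT(*) FROM export_jobs')->fetchColumn();
    if ($running > 0) {
        return null;  // another export is already in progress
    }
    $stmt = $db->prepare('INSERT INTO export_jobs (pid, progress) VALUES (?, 0)');
    $stmt->execute([getmypid()]);
    return (int) $db->lastInsertId();
}

// The export loop updates progress so the frontend can poll it...
function updateProgress(PDO $db, int $jobId, int $percent): void {
    $stmt = $db->prepare('UPDATE export_jobs SET progress = ? WHERE job_id = ?');
    $stmt->execute([$percent, $jobId]);
}

// ...and the row is deleted when the job finishes.
function finishJob(PDO $db, int $jobId): void {
    $stmt = $db->prepare('DELETE FROM export_jobs WHERE job_id = ?');
    $stmt->execute([$jobId]);
}
```

Note that the count-then-insert in startJob() has a small race window if two requests arrive simultaneously; a real implementation would probably want to close it with a transaction or a unique constraint.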
Testing is now complete on sealion. Just need the final deployment on Sunday.