The problem is in JobProcessor line 227: https://github.com/yawik/SimpleImport/blob/b0ccd05ba5f514b018dd85665e1186f83a9316c4/src/CrawlerProcessor/JobProcessor.php#L221-L230
The call to the store() method can throw an exception, which is then not caught; the script exits, and no crawler items are stored in the database because the crawler is never flushed. That flush happens in ConsoleController line 134.
Handling these exceptions will be tricky, because we cannot know what exception is thrown and in which step of the Doctrine persistence it occurs. In this particular situation, it will be a Solr exception during the postFlush event, triggered by a listener in the Solr module. That means the job is persisted to the database, as the exception occurs after the flush.
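To illustrate the ordering, here is a schematic listener (the class and client names are assumptions, not the actual Solr module code). Doctrine dispatches postFlush only after the write to the database has completed, so an exception thrown there leaves the job persisted while everything that would run after flush() is skipped:

```php
use Doctrine\ODM\MongoDB\Event\PostFlushEventArgs;

// Hypothetical sketch of a postFlush listener like the one in the
// Solr module. Doctrine fires postFlush *after* the database write
// has succeeded.
class SolrCommitListener
{
    private $solrClient;

    public function postFlush(PostFlushEventArgs $args)
    {
        // If this commit fails, the exception propagates out of
        // DocumentManager::flush() and up through store(), but the
        // job document is already stored at this point.
        $this->solrClient->commit();
    }
}
```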
Possibilities I can think of:
1. Catch exceptions around the store() call. If one is caught, check whether the job is persisted (get the job entity from Doctrine and check whether it has an id). If it is, proceed as if nothing happened, because all we care about is that the job is actually stored in the database. If it is not, remove the item from the crawler so it does not get persisted (keeping crawler and database in sync).
2. Catch exceptions around the $processor->execute() call (ConsoleController:132). Here we need to check for each crawler item whether the associated job is actually persisted in the database, drop all items that do not have an associated job, and flush the database to return to a synchronized state.
I prefer the first option, as it is closer to the root of the problem and to the particular scenario of this issue (see the sketch below).
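Roughly, such a guard could look like this. This is only a sketch; $jobRepository, the lookup criterion, and removeItem() are assumptions, not the actual SimpleImport API:

```php
$job = null;

try {
    $job = $this->store($item);
} catch (\Exception $e) {
    // Ask Doctrine whether the flush succeeded before the exception
    // was thrown (e.g. by the Solr postFlush listener). The lookup
    // criterion here is hypothetical.
    $job = $jobRepository->findOneBy(['reference' => $item->getId()]);

    if (null === $job || !$job->getId()) {
        // The job never made it to the database: drop the crawler
        // item so crawler state and database stay in sync.
        $crawler->removeItem($item);
    }
    // Otherwise proceed as if nothing happened; all we care about
    // is that the job is actually stored in the database.
}
```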
A proper way to handle persistence exceptions (especially those thrown after the actual flush), or at least a consistency mechanism, should be implemented in the YAWIK core.
@kilip
If you decide to work on this, please make your pull request against the upgrade-geocoder branch (which is effectively the release-0.2 branch).
@TiSiE
I have chosen the first option, and here's how I handle this store exception:
Exceptions are handled now. Thank you
In combination with the Solr module (https://github.com/yawik/Solr) it can happen that the Solr commit fails. There will be an exception and the import stops. The problem is this: the job gets inserted into the database, but due to the Solr exception it is not recorded in the crawler. So the next time the import is started, the same job will be created again. There is no automatic deletion either, since the job is not managed by the crawler.