@kltm have we seen anything like this for the owltools loader?
Yes, things like it. At first blush, I think it looks like a timeout before finish--either slow over wire or too large. Are you still doing the single blob?
Yes, single blob. But it's not a consistent exception; it comes and goes.
I think it started when we upgraded to Solr 6, so I'll try updating Solr first.
You may be right at the limit then. Maybe try cutting out proxies or loading from local? It may be hard to figure it out without getting really good numbers on usage and time at each step. If you get more fails on a smaller or more "remote" machine, that may also be an indication.
This sounds like a priority ticket - is it still happening?
Yes, it still happens randomly. When it fails, the generated JSON can still be indexed afterwards, but that's a manual step. @kshefchek is on it; we talked about it on Friday and this morning.
The PR here, https://github.com/SciGraph/golr-loader/pull/33, is merely duct tape to fix this. It looks like we've outgrown the current approach: the run requires 155 GB of memory, with 64 GB of that in cached mem, and we end up swapping, which likely causes these intermittent timeout issues. Some thoughts on a way forward:
1a. Pros: no additional development needed. Cons: short-sighted.
1b. Optimize the current code.
2. Write a golr updater.
On ordering the results: apparently you cannot apply an ORDER BY to a UNION of queries. I remember this was discussed when we initially developed the stack, and it's likely the reason for the huge MapDB approach:
https://github.com/neo4j/neo4j/issues/2725
I can see if there have been any updates since that could serve as a workaround.
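For context, a minimal sketch of the pattern that hits this limitation (the labels here are purely illustrative, not our actual schema):

```cypher
// Illustrative labels only. Depending on the Neo4j version, the trailing
// ORDER BY is either rejected or applies only to the last sub-query, so
// there is no way to sort the combined UNION result directly.
MATCH (g:gene)    RETURN g.iri AS iri
UNION
MATCH (d:disease) RETURN d.iri AS iri
ORDER BY iri
```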
@kshefchek Honestly, I think that 1b and 2 are the only real ways forward.
@kshefchek Practically speaking though, it seems like there is probably a fundamental non-scaling issue in the code. While technically 1b might work, it is likely only deferring the issue a little while. At the very least, I'd vote that 2 should be done until the exact nature of the scaling issue is known.
Neo4j now has procedures that can be used as a workaround for post-union processing:
https://github.com/neo4j-contrib/neo4j-apoc-procedures
https://neo4j.com/developer/kb/post-union-processing/
On a test query it seems to work just fine, I can test this out if it sounds like a good way forward.
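Roughly what the workaround looks like, again with illustrative labels rather than our real loader query: apoc.cypher.run executes the inner UNION as a sub-query, and the outer query is then free to sort the combined rows.

```cypher
// Post-union processing via APOC (illustrative labels only).
CALL apoc.cypher.run('
  MATCH (g:gene)    RETURN g.iri AS iri
  UNION
  MATCH (d:disease) RETURN d.iri AS iri
', {}) YIELD value
RETURN value.iri AS iri
ORDER BY iri
```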
Taking a closer look, these timeouts occur when neo4j has a long GC pause, for example:
Golr loader:
Jul 11, 2017 8:22:52 AM org.monarch.golr.GolrLoader addAndCommitToSolr
WARNING: Caught: org.apache.solr.client.solrj.SolrServerException: IOException occured when talking to server at: http://localhost:8983/solr/golr

Neo4j:
2017-07-11 08:21:30.606+0000 WARN [o.n.k.i.c.MonitorGc] GC Monitor: Application threads blocked for 2048ms.
2017-07-11 08:22:52.318+0000 WARN [o.n.k.i.c.MonitorGc] GC Monitor: Application threads blocked for 60676ms.
2017-07-11 08:23:13.748+0000 WARN [o.n.k.i.c.MonitorGc] GC Monitor: Application threads blocked for 789ms.
I have a test branch that uses post-union processing to order the results, which uses less memory but has longer GC pauses, so the timeouts are more frequent. Knowing now that Neo4j is the issue, perhaps we can play around with tuning the memory and GC settings, but we may need to switch to a server with more memory as our graph grows.
This is resolved now. The fix was replacing UNION with UNION ALL and no longer having Neo4j uniquify query results, since that is done in the indexing code anyway. The loader now has a much smaller memory footprint.
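For the record, a sketch of the shape of that change (illustrative labels again): UNION ALL skips Neo4j's de-duplication of the combined result set, and the indexing code de-duplicates instead.

```cypher
// UNION ALL streams the combined rows through without uniquifying them
// in Neo4j; duplicates are handled downstream in the indexing code.
// Labels are illustrative only.
MATCH (g:gene)    RETURN g.iri AS iri
UNION ALL
MATCH (d:disease) RETURN d.iri AS iri
```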
Not sure why this is happening, especially with a local solr instance. It doesn't seem to index the file at all.
The first thing to do is to fail the process; for now this is a silent exception. Then, since the file is not indexed at all, a retry can be attempted, or at least the JSON file can be kept for a manual retry.