jpmccu opened this issue 8 years ago
The INSERT query string is probably too long. How many triples are you seeking to insert with this SPARQL query?
Large datasets should be saved to a file and loaded with the RDF Bulk Loader, as documented.
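For reference, the documented bulk-load sequence amounts to registering the files with `ld_dir()` and running `rdf_loader_run()` from `isql`. Here is a minimal sketch driving it from Python; the path, file mask, graph IRI, port, and credentials are placeholders, and the data directory must be listed under `DirsAllowed` in `virtuoso.ini` for `ld_dir()` to see it:

```python
# Sketch: drive the documented Virtuoso bulk loader via the isql client.
# Placeholder path, graph IRI, and credentials; the directory must be
# listed under DirsAllowed in virtuoso.ini.
import subprocess

commands = """
ld_dir('/data/to_load', '*.nt', 'http://example.org/graph');
rdf_loader_run();
checkpoint;
"""

subprocess.run(
    ["isql", "localhost:1111", "dba", "dba"],
    input=commands, text=True, check=True,
)
```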
This query inserted 12k triples. I want to be able to insert triples, in my case from graphs generated by RDFlib, directly into `SPARQLUpdateStore`- or Virtuoso-store-backed Graphs and ConjunctiveGraphs. The VirtBulkRDFLoader isn't a standard data-loading approach for RDF stores, but `INSERT` (and `INSERT DATA`) is. This is part of a use case where users and other data providers upload reasonable but non-trivial sets of triples via an interface for knowledge graph construction and management. For the record, I have had no trouble performing the same operations against a BlazeGraph NanoSparqlServer-based endpoint, and I'll probably keep using that if Virtuoso can't handle use cases like this through standard interfaces.
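For context, the RDFlib pattern in question looks roughly like this (a minimal sketch; the endpoint URLs and graph IRI are placeholders):

```python
# Minimal sketch of an RDFlib Graph backed by SPARQLUpdateStore.
# Every add() (or a bulk `g += local_graph`) is translated into a
# SPARQL INSERT DATA request against the remote endpoint, which is
# where a large insert hits the compiler's memory limit.
from rdflib import Graph, URIRef, Literal
from rdflib.plugins.stores.sparqlstore import SPARQLUpdateStore

# Positional args: query endpoint, then update endpoint (placeholders).
store = SPARQLUpdateStore(
    "http://localhost:8890/sparql",
    "http://localhost:8890/sparql",
)
g = Graph(store, identifier=URIRef("http://example.org/graph"))
g.add((URIRef("http://example.org/s"),
       URIRef("http://example.org/p"),
       Literal("object")))
```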
We are looking into this...
Is there any progress on this? This is preventing me from using Virtuoso in my knowledge graph framework, since we use SPARQL update as a standardized interface for graph edits.
I have the same issue with Virtuoso Open Source 7.2. The response from Virtuoso is `Virtuoso 37000 Error SP030: SPARQL compiler, line 4991: memory exhausted at '<....`
It seems that loading a high number of triples is the problem; I tried to insert 9000 triples.
Any progress with this? It's been 4 years, and it would be good to have a reliable online method of adding to knowledge graphs.
I also face this problem. I have to programmatically insert triples into a Virtuoso instance using Jena's `UpdateProcessor` and only get `400 Bad Request` errors. When I insert the same data using curl, I see the same memory-exhausted error. The limit for me was around 220 kB: if the query string is smaller than that, the `INSERT {GRAPH <....> {...}} WHERE {}` queries succeed.
Does anyone know of an option in `virtuoso.ini` that can increase the memory available to the SPARQL compiler, or some other option to set when compiling Virtuoso Open Source?
The only workaround I see right now is to chunk the input data and send multiple INSERTs. Unfortunately, this is not trivial: you have to make assumptions about the semantics of the data and make sure that blank nodes are not messed up (a blank node whose triples land in different chunks would be stored as several distinct nodes).
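A minimal sketch of that chunking approach with RDFlib, assuming a form-encoded SPARQL 1.1 Update endpoint; the endpoint URL, graph IRI, and chunk size are placeholders. Skolemizing the blank nodes first is one way to keep them consistent across chunks, at the cost of storing skolem IRIs instead of blank nodes:

```python
import requests
from rdflib import Graph

ENDPOINT = "http://localhost:8890/sparql"   # placeholder update endpoint
TARGET_GRAPH = "http://example.org/graph"   # placeholder target named graph
CHUNK_SIZE = 500                            # tune to stay under the size limit

def insert_in_chunks(g: Graph) -> None:
    # Skolemize first: blank nodes become stable IRIs, so it no longer
    # matters if a node's triples end up in different chunks.
    g = g.skolemize()
    triples = list(g)
    for i in range(0, len(triples), CHUNK_SIZE):
        chunk = triples[i:i + CHUNK_SIZE]
        body = " .\n".join(f"{s.n3()} {p.n3()} {o.n3()}" for s, p, o in chunk)
        update = f"INSERT DATA {{ GRAPH <{TARGET_GRAPH}> {{ {body} . }} }}"
        # SPARQL 1.1 protocol: form-encoded 'update' parameter.
        resp = requests.post(ENDPOINT, data={"update": update})
        resp.raise_for_status()
```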
This is an issue both in the SPARQL endpoint and through iSQL in Conductor. When I attempt to run the query in sio_insert_test.txt, I get the same memory-exhausted error as above. The `MaxQueryMem` is 3G, which seems like more than enough to handle 12k triples.
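For reference, that setting lives in the `[Parameters]` section of `virtuoso.ini`; a sketch with the value reported above (whether this knob actually bounds the SPARQL compiler's memory is exactly the open question here):

```ini
[Parameters]
; Value reported above; INSERTs of ~12k triples still exhaust the
; SPARQL compiler's memory, so this limit alone does not seem to help.
MaxQueryMem = 3G
```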