mff-uk / odcs

ODCleanStore

When the repository is lost and then recovers, the pipeline survives, but some data is lost #877

Closed tomas-knap closed 10 years ago

tomas-knap commented 10 years ago

Suppose the method addStatement is called in a for loop in the DPU, adding one triple in every iteration i. If there is a connection problem in iteration i (e.g. Virtuoso freezes and is then restarted), the triple currently being processed is skipped once Virtuoso is back up, which must not happen.

According to code:

protected void addStatement(Statement statement, Resource... graphs) {

    RepositoryConnection connection = null;

    try {
        connection = getConnection();
        if (graphs != null) {
            connection.add(statement, graphs);
        } else {
            connection.add(statement);
        }

        //connection.commit();

    } catch (RepositoryException e) {
        // the exception is swallowed here, so the statement is lost
        hasBrokenConnection = true;
        logger.debug(e.getMessage());
    }
}

What happens if there is a problem with the connection? The RepositoryException is caught and merely logged, so the triple currently being added is silently dropped. So basically you have to store the triple that was not successfully added and replay adding it once the repository is running again.
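The replay idea can be sketched as follows. This is a minimal sketch, not the project's actual code: `TransientStoreException` is a hypothetical stand-in for `RepositoryException`, and the `Consumer` stands in for `connection.add(...)`.

```java
import java.util.function.Consumer;

public class RetryAdd {

    /** Hypothetical stand-in for a transient connection failure. */
    public static class TransientStoreException extends RuntimeException {
    }

    /**
     * Keeps a reference to the statement being added and replays the store
     * operation until it succeeds or maxAttempts is exhausted, so the
     * statement is never silently dropped on a connection failure.
     */
    public static <T> void addWithReplay(T statement, Consumer<T> store, int maxAttempts) {
        for (int attempt = 1; ; attempt++) {
            try {
                store.accept(statement);   // replay the same statement each time
                return;                    // success: the triple was not lost
            } catch (TransientStoreException e) {
                if (attempt >= maxAttempts) {
                    throw new IllegalStateException(
                            "Statement not stored after " + maxAttempts + " attempts", e);
                }
                // the real fix would wait here and re-establish the connection
            }
        }
    }
}
```

The key design point is that the pending statement stays in scope across failures, instead of being consumed by a catch block that only logs.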

tomas-knap commented 10 years ago

Please propose a solution that works for all the baseRDF methods.

tomesj commented 10 years ago

Summary:

The problem has been solved comprehensively in the RDF module (everywhere a Virtuoso connection is used). The issue is ready for testing and can be closed :-)

tomas-knap commented 10 years ago

Jirka, please describe the idea behind your change.

tomesj commented 10 years ago

The basic idea of the change: to prevent DB/Virtuoso outages from breaking execution, we had to intercept all operations/methods that use a connection to Virtuoso. These are all methods of the RepositoryConnection interface, which our Virtuoso repository uses.

For this reason the class FailureTolerantRepositoryWrapper (originally created by Honza) was extended; previously it handled only basic outages.

Implementations have now been added for all methods of the RepositoryConnection interface, each of them guarded against outages.

This implementation was then set as the default RepositoryConnection for the Virtuoso repository, so every method that could be affected by an outage is covered.

By briefly testing (killing the DB/Virtuoso directly during execution) I was able to verify that whenever an outage now occurs anywhere Virtuoso is involved, the code does not continue until the connection is re-established (the implementation in FailureTolerantRepositoryWrapper provides this) or the number of re-connection attempts is exhausted (by default it keeps retrying indefinitely).
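The wrapping idea described above (every RepositoryConnection call routed through a reconnect-and-retry loop, retrying indefinitely by default) can be sketched generically. The names below are illustrative, not the actual FailureTolerantRepositoryWrapper API:

```java
import java.util.function.Supplier;

public class FailureTolerantSketch {

    /** Hypothetical stand-in for a lost-connection failure. */
    public static class ConnectionLostException extends RuntimeException {
    }

    private final int maxAttempts;   // <= 0 means retry forever (the described default)

    public FailureTolerantSketch(int maxAttempts) {
        this.maxAttempts = maxAttempts;
    }

    /**
     * Every delegated repository operation goes through this loop, so
     * execution blocks until the connection is back instead of
     * propagating the failure and losing data.
     */
    public <T> T invoke(Supplier<T> operation) {
        int attempt = 0;
        while (true) {
            try {
                return operation.get();
            } catch (ConnectionLostException e) {
                attempt++;
                if (maxAttempts > 0 && attempt >= maxAttempts) {
                    throw e;   // retry budget exhausted
                }
                // the real wrapper would sleep and re-open the connection here
            }
        }
    }
}
```

A delegating wrapper like this covers every method uniformly, which is why extending it to the whole RepositoryConnection interface closes the gaps the original per-method try/catch blocks left open.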

tomas-knap commented 10 years ago

Jirka, please also test that no data is lost when the connection drops and is restored. That is, run an extractor on something and check the result. Then run it again, kill Virtuoso during the extraction and bring it back up (possibly several times), and check whether the number of resulting triples is the same.

tomesj commented 10 years ago

I already tried this (as the very first thing), and as far as the data is concerned, nothing is lost :-)

tomas-knap commented 10 years ago

Jirka, it does not work well, see odcs-test - pipeline exec 139

2013-12-31 12:39:21,369 [dpu: SPARQL Loader nsoud.cz] DEBUG exec:139 dpu:166 c.cuni.mff.xrg.odcs.backend.execution.dpu.Executor - Executing DPU ... 2013-12-31 12:39:21,393 [dpu: SPARQL Loader nsoud.cz] INFO exec:139 dpu:166 cz.cuni.mff.xrg.odcs.loader.rdf.RDFLoader - Loading 5000 triples 2013-12-31 12:39:21,560 [dpu: SPARQL Loader nsoud.cz] DEBUG exec:139 dpu:166 cz.cuni.mff.xrg.odcs.loader.rdf.SPARQLoader - Data 1/50 part loaded successful 2013-12-31 12:39:21,588 [dpu: SPARQL Loader nsoud.cz] DEBUG exec:139 dpu:166 cz.cuni.mff.xrg.odcs.loader.rdf.SPARQLoader - Data 2/50 part loaded successful 2013-12-31 12:39:21,616 [dpu: SPARQL Loader nsoud.cz] DEBUG exec:139 dpu:166 cz.cuni.mff.xrg.odcs.loader.rdf.SPARQLoader - Data 3/50 part loaded successful 2013-12-31 12:39:21,644 [dpu: SPARQL Loader nsoud.cz] DEBUG exec:139 dpu:166 cz.cuni.mff.xrg.odcs.loader.rdf.SPARQLoader - Data 4/50 part loaded successful 2013-12-31 12:39:21,673 [dpu: SPARQL Loader nsoud.cz] DEBUG exec:139 dpu:166 cz.cuni.mff.xrg.odcs.loader.rdf.SPARQLoader - Data 5/50 part loaded successful 2013-12-31 12:39:21,703 [dpu: SPARQL Loader nsoud.cz] DEBUG exec:139 dpu:166 cz.cuni.mff.xrg.odcs.loader.rdf.SPARQLoader - Data 6/50 part loaded successful 2013-12-31 12:39:21,733 [dpu: SPARQL Loader nsoud.cz] DEBUG exec:139 dpu:166 cz.cuni.mff.xrg.odcs.loader.rdf.SPARQLoader - Data 7/50 part loaded successful 2013-12-31 12:39:21,768 [dpu: SPARQL Loader nsoud.cz] DEBUG exec:139 dpu:166 cz.cuni.mff.xrg.odcs.loader.rdf.SPARQLoader - Data 8/50 part loaded successful 2013-12-31 12:39:21,801 [dpu: SPARQL Loader nsoud.cz] DEBUG exec:139 dpu:166 cz.cuni.mff.xrg.odcs.loader.rdf.SPARQLoader - Data 9/50 part loaded successful 2013-12-31 12:39:21,833 [dpu: SPARQL Loader nsoud.cz] DEBUG exec:139 dpu:166 cz.cuni.mff.xrg.odcs.loader.rdf.SPARQLoader - Data 10/50 part loaded successful 2013-12-31 12:39:21,875 [dpu: SPARQL Loader nsoud.cz] DEBUG exec:139 dpu:166 cz.cuni.mff.xrg.odcs.loader.rdf.SPARQLoader - Data 11/50 part loaded 
successful 2013-12-31 12:39:21,915 [dpu: SPARQL Loader nsoud.cz] DEBUG exec:139 dpu:166 cz.cuni.mff.xrg.odcs.loader.rdf.SPARQLoader - Data 12/50 part loaded successful 2013-12-31 12:39:21,952 [dpu: SPARQL Loader nsoud.cz] DEBUG exec:139 dpu:166 cz.cuni.mff.xrg.odcs.loader.rdf.SPARQLoader - Data 13/50 part loaded successful 2013-12-31 12:39:21,988 [dpu: SPARQL Loader nsoud.cz] DEBUG exec:139 dpu:166 cz.cuni.mff.xrg.odcs.loader.rdf.SPARQLoader - Data 14/50 part loaded successful 2013-12-31 12:39:22,026 [dpu: SPARQL Loader nsoud.cz] DEBUG exec:139 dpu:166 cz.cuni.mff.xrg.odcs.loader.rdf.SPARQLoader - Data 15/50 part loaded successful 2013-12-31 12:39:22,063 [dpu: SPARQL Loader nsoud.cz] DEBUG exec:139 dpu:166 cz.cuni.mff.xrg.odcs.loader.rdf.SPARQLoader - Data 16/50 part loaded successful 2013-12-31 12:39:22,100 [dpu: SPARQL Loader nsoud.cz] DEBUG exec:139 dpu:166 cz.cuni.mff.xrg.odcs.loader.rdf.SPARQLoader - Data 17/50 part loaded successful 2013-12-31 12:39:22,137 [dpu: SPARQL Loader nsoud.cz] DEBUG exec:139 dpu:166 cz.cuni.mff.xrg.odcs.loader.rdf.SPARQLoader - Data 18/50 part loaded successful 2013-12-31 12:39:22,175 [dpu: SPARQL Loader nsoud.cz] DEBUG exec:139 dpu:166 cz.cuni.mff.xrg.odcs.loader.rdf.SPARQLoader - Data 19/50 part loaded successful 2013-12-31 12:39:22,216 [dpu: SPARQL Loader nsoud.cz] DEBUG exec:139 dpu:166 cz.cuni.mff.xrg.odcs.loader.rdf.SPARQLoader - Data 20/50 part loaded successful 2013-12-31 12:39:22,257 [dpu: SPARQL Loader nsoud.cz] DEBUG exec:139 dpu:166 cz.cuni.mff.xrg.odcs.loader.rdf.SPARQLoader - Data 21/50 part loaded successful 2013-12-31 12:39:22,271 [taskExecutor-14] DEBUG exec: dpu: c.c.mff.xrg.odcs.backend.logback.SqlAppenderImpl - flush() : get connection 2013-12-31 12:39:22,271 [taskExecutor-14] DEBUG exec: dpu: c.c.mff.xrg.odcs.backend.logback.SqlAppenderImpl - flush() : flushIntoDatabase 2013-12-31 12:39:22,292 [taskExecutor-14] DEBUG exec: dpu: c.c.mff.xrg.odcs.backend.logback.SqlAppenderImpl - flush() -> finished in: 21 ms 
2013-12-31 12:39:22,304 [dpu: SPARQL Loader nsoud.cz] DEBUG exec:139 dpu:166 cz.cuni.mff.xrg.odcs.loader.rdf.SPARQLoader - Data 22/50 part loaded successful 2013-12-31 12:39:22,350 [dpu: SPARQL Loader nsoud.cz] DEBUG exec:139 dpu:166 cz.cuni.mff.xrg.odcs.loader.rdf.SPARQLoader - Data 23/50 part loaded successful 2013-12-31 12:39:22,394 [dpu: SPARQL Loader nsoud.cz] DEBUG exec:139 dpu:166 cz.cuni.mff.xrg.odcs.loader.rdf.SPARQLoader - Data 24/50 part loaded successful 2013-12-31 12:39:22,437 [dpu: SPARQL Loader nsoud.cz] DEBUG exec:139 dpu:166 cz.cuni.mff.xrg.odcs.loader.rdf.SPARQLoader - Data 25/50 part loaded successful 2013-12-31 12:39:22,481 [dpu: SPARQL Loader nsoud.cz] DEBUG exec:139 dpu:166 cz.cuni.mff.xrg.odcs.loader.rdf.SPARQLoader - Data 26/50 part loaded successful 2013-12-31 12:39:22,525 [dpu: SPARQL Loader nsoud.cz] DEBUG exec:139 dpu:166 cz.cuni.mff.xrg.odcs.loader.rdf.SPARQLoader - Data 27/50 part loaded successful 2013-12-31 12:39:22,570 [dpu: SPARQL Loader nsoud.cz] DEBUG exec:139 dpu:166 cz.cuni.mff.xrg.odcs.loader.rdf.SPARQLoader - Data 28/50 part loaded successful 2013-12-31 12:39:22,660 [dpu: SPARQL Loader nsoud.cz] DEBUG exec:139 dpu:166 cz.cuni.mff.xrg.odcs.loader.rdf.SPARQLoader - Data 29/50 part loaded successful 2013-12-31 12:39:22,730 [dpu: SPARQL Loader nsoud.cz] DEBUG exec:139 dpu:166 cz.cuni.mff.xrg.odcs.loader.rdf.SPARQLoader - Data 30/50 part loaded successful 2013-12-31 12:39:22,779 [dpu: SPARQL Loader nsoud.cz] DEBUG exec:139 dpu:166 cz.cuni.mff.xrg.odcs.loader.rdf.SPARQLoader - Data 31/50 part loaded successful 2013-12-31 12:39:22,825 [dpu: SPARQL Loader nsoud.cz] DEBUG exec:139 dpu:166 cz.cuni.mff.xrg.odcs.loader.rdf.SPARQLoader - Data 32/50 part loaded successful 2013-12-31 12:39:22,872 [dpu: SPARQL Loader nsoud.cz] DEBUG exec:139 dpu:166 cz.cuni.mff.xrg.odcs.loader.rdf.SPARQLoader - Data 33/50 part loaded successful 2013-12-31 12:39:22,918 [dpu: SPARQL Loader nsoud.cz] DEBUG exec:139 dpu:166 
cz.cuni.mff.xrg.odcs.loader.rdf.SPARQLoader - Data 34/50 part loaded successful 2013-12-31 12:39:22,965 [dpu: SPARQL Loader nsoud.cz] DEBUG exec:139 dpu:166 cz.cuni.mff.xrg.odcs.loader.rdf.SPARQLoader - Data 35/50 part loaded successful 2013-12-31 12:39:23,012 [dpu: SPARQL Loader nsoud.cz] DEBUG exec:139 dpu:166 cz.cuni.mff.xrg.odcs.loader.rdf.SPARQLoader - Data 36/50 part loaded successful 2013-12-31 12:39:23,060 [dpu: SPARQL Loader nsoud.cz] DEBUG exec:139 dpu:166 cz.cuni.mff.xrg.odcs.loader.rdf.SPARQLoader - Data 37/50 part loaded successful 2013-12-31 12:39:23,108 [dpu: SPARQL Loader nsoud.cz] DEBUG exec:139 dpu:166 cz.cuni.mff.xrg.odcs.loader.rdf.SPARQLoader - Data 38/50 part loaded successful 2013-12-31 12:39:23,157 [dpu: SPARQL Loader nsoud.cz] DEBUG exec:139 dpu:166 cz.cuni.mff.xrg.odcs.loader.rdf.SPARQLoader - Data 39/50 part loaded successful 2013-12-31 12:39:23,413 [dpu: SPARQL Loader nsoud.cz] DEBUG exec:139 dpu:166 cz.cuni.mff.xrg.odcs.loader.rdf.SPARQLoader - Problem by creating 40. data part - ATTEMPT number 1: Given query for lazy triples is probably not valid. : SPARQL execute failed:[CONSTRUCT {?a ?b ?c} WHERE {?a ?b ?c} LIMIT 100 OFFSET 4000] Exception:virtuoso.jdbc4.VirtuosoException: Virtuoso Communications Link Failure (timeout) : Connection to the server lost 2013-12-31 12:39:24,416 [dpu: SPARQL Loader nsoud.cz] WARN exec:139 dpu:166 c.c.m.x.o.r.impl.FailureTolerantRepositoryWrapper - Database is down after 1 attempts. 2013-12-31 12:39:26,417 [dpu: SPARQL Loader nsoud.cz] WARN exec:139 dpu:166 c.c.m.x.o.r.impl.FailureTolerantRepositoryWrapper - Database is down after 2 attempts. 
2013-12-31 12:39:26,571 [taskExecutor-15] DEBUG exec: dpu: c.c.mff.xrg.odcs.backend.logback.SqlAppenderImpl - flush() : get connection 2013-12-31 12:39:26,571 [taskExecutor-15] DEBUG exec: dpu: c.c.mff.xrg.odcs.backend.logback.SqlAppenderImpl - flush() : flushIntoDatabase 2013-12-31 12:39:26,574 [taskExecutor-15] DEBUG exec: dpu: c.c.mff.xrg.odcs.backend.logback.SqlAppenderImpl - flush() -> finished in: 3 ms 2013-12-31 12:39:28,423 [dpu: SPARQL Loader nsoud.cz] WARN exec:139 dpu:166 c.c.m.x.o.r.impl.FailureTolerantRepositoryWrapper - Database is down after 3 attempts. 2013-12-31 12:39:29,575 [taskScheduler-6] TRACE exec: dpu: cz.cuni.mff.xrg.odcs.backend.execution.Engine - Checking for new executions. 2013-12-31 12:39:30,425 [dpu: SPARQL Loader nsoud.cz] WARN exec:139 dpu:166 c.c.m.x.o.r.impl.FailureTolerantRepositoryWrapper - Database is down after 4 attempts. 2013-12-31 12:39:30,871 [taskExecutor-16] DEBUG exec: dpu: c.c.mff.xrg.odcs.backend.logback.SqlAppenderImpl - flush() : get connection 2013-12-31 12:39:30,872 [taskExecutor-16] DEBUG exec: dpu: c.c.mff.xrg.odcs.backend.logback.SqlAppenderImpl - flush() : flushIntoDatabase 2013-12-31 12:39:30,875 [taskExecutor-16] DEBUG exec: dpu: c.c.mff.xrg.odcs.backend.logback.SqlAppenderImpl - flush() -> finished in: 4 ms 2013-12-31 12:39:32,426 [dpu: SPARQL Loader nsoud.cz] WARN exec:139 dpu:166 c.c.m.x.o.r.impl.FailureTolerantRepositoryWrapper - Database is down after 5 attempts. 2013-12-31 12:39:34,428 [dpu: SPARQL Loader nsoud.cz] WARN exec:139 dpu:166 c.c.m.x.o.r.impl.FailureTolerantRepositoryWrapper - Database is down after 6 attempts. 
2013-12-31 12:39:35,172 [taskExecutor-3] DEBUG exec: dpu: c.c.mff.xrg.odcs.backend.logback.SqlAppenderImpl - flush() : get connection 2013-12-31 12:39:35,172 [taskExecutor-3] DEBUG exec: dpu: c.c.mff.xrg.odcs.backend.logback.SqlAppenderImpl - flush() : flushIntoDatabase 2013-12-31 12:39:35,175 [taskExecutor-3] DEBUG exec: dpu: c.c.mff.xrg.odcs.backend.logback.SqlAppenderImpl - flush() -> finished in: 3 ms 2013-12-31 12:39:36,429 [dpu: SPARQL Loader nsoud.cz] WARN exec:139 dpu:166 c.c.m.x.o.r.impl.FailureTolerantRepositoryWrapper - Database is down after 7 attempts. 2013-12-31 12:39:38,473 [dpu: SPARQL Loader nsoud.cz] WARN exec:139 dpu:166 c.c.m.x.o.r.impl.FailureTolerantRepositoryWrapper - Database is down after 8 attempts. 2013-12-31 12:39:39,472 [taskExecutor-1] DEBUG exec: dpu: c.c.mff.xrg.odcs.backend.logback.SqlAppenderImpl - flush() : get connection 2013-12-31 12:39:39,473 [taskExecutor-1] DEBUG exec: dpu: c.c.mff.xrg.odcs.backend.logback.SqlAppenderImpl - flush() : flushIntoDatabase 2013-12-31 12:39:39,475 [taskExecutor-1] DEBUG exec: dpu: c.c.mff.xrg.odcs.backend.logback.SqlAppenderImpl - flush() -> finished in: 3 ms 2013-12-31 12:39:40,474 [dpu: SPARQL Loader nsoud.cz] WARN exec:139 dpu:166 c.c.m.x.o.r.impl.FailureTolerantRepositoryWrapper - Database is down after 9 attempts. 2013-12-31 12:39:42,476 [dpu: SPARQL Loader nsoud.cz] WARN exec:139 dpu:166 c.c.m.x.o.r.impl.FailureTolerantRepositoryWrapper - Database is down after 10 attempts. 
2013-12-31 12:39:43,773 [taskExecutor-2] DEBUG exec: dpu: c.c.mff.xrg.odcs.backend.logback.SqlAppenderImpl - flush() : get connection 2013-12-31 12:39:43,773 [taskExecutor-2] DEBUG exec: dpu: c.c.mff.xrg.odcs.backend.logback.SqlAppenderImpl - flush() : flushIntoDatabase 2013-12-31 12:39:43,775 [taskExecutor-2] DEBUG exec: dpu: c.c.mff.xrg.odcs.backend.logback.SqlAppenderImpl - flush() -> finished in: 2 ms 2013-12-31 12:39:44,477 [dpu: SPARQL Loader nsoud.cz] WARN exec:139 dpu:166 c.c.m.x.o.r.impl.FailureTolerantRepositoryWrapper - Database is down after 11 attempts. 2013-12-31 12:39:46,479 [dpu: SPARQL Loader nsoud.cz] WARN exec:139 dpu:166 c.c.m.x.o.r.impl.FailureTolerantRepositoryWrapper - Database is down after 12 attempts. 2013-12-31 12:39:48,073 [taskExecutor-4] DEBUG exec: dpu: c.c.mff.xrg.odcs.backend.logback.SqlAppenderImpl - flush() : get connection 2013-12-31 12:39:48,073 [taskExecutor-4] DEBUG exec: dpu: c.c.mff.xrg.odcs.backend.logback.SqlAppenderImpl - flush() : flushIntoDatabase 2013-12-31 12:39:48,075 [taskExecutor-4] DEBUG exec: dpu: c.c.mff.xrg.odcs.backend.logback.SqlAppenderImpl - flush() -> finished in: 2 ms 2013-12-31 12:39:48,480 [dpu: SPARQL Loader nsoud.cz] WARN exec:139 dpu:166 c.c.m.x.o.r.impl.FailureTolerantRepositoryWrapper - Database is down after 13 attempts. 
2013-12-31 12:39:49,188 [taskExecutor-5] TRACE exec: dpu: cz.cuni.mff.xrg.odcs.backend.monitor.RdfDatabase - check: start 2013-12-31 12:39:49,189 [taskExecutor-5] TRACE exec: dpu: cz.cuni.mff.xrg.odcs.backend.monitor.RdfDatabase - check: ok 2013-12-31 12:39:49,218 [taskExecutor-6] TRACE exec: dpu: cz.cuni.mff.xrg.odcs.backend.monitor.RdfDatabase - executeQuery:start 2013-12-31 12:39:49,218 [taskExecutor-6] INFO exec: dpu: c.c.mff.xrg.odcs.rdf.repositories.VirtuosoRDFRepo - Set new data graph - http://linked.opendata.cz/resource/odcs/internal/ 2013-12-31 12:39:49,219 [taskExecutor-6] INFO exec: dpu: c.c.m.x.o.r.impl.FailureTolerantRepositoryWrapper - Missing config property useExtension, using default value false. 2013-12-31 12:39:49,219 [taskExecutor-6] INFO exec: dpu: c.c.mff.xrg.odcs.rdf.repositories.VirtuosoRDFRepo - Virtuoso repository with data graph http://linked.opendata.cz/resource/odcs/internal/ successfully initialized. 2013-12-31 12:39:49,220 [taskExecutor-6] WARN exec: dpu: c.c.m.x.o.r.impl.FailureTolerantRepositoryWrapper - Database is down after 1 attempts. 2013-12-31 12:39:49,483 [taskScheduler-1] TRACE exec: dpu: c.c.mff.xrg.odcs.commons.app.scheduling.Schedule - onTimeCheck started 2013-12-31 12:39:49,486 [taskScheduler-1] TRACE exec: dpu: c.c.mff.xrg.odcs.commons.app.scheduling.Schedule - onTimeCheck finished 2013-12-31 12:39:49,578 [taskScheduler-5] TRACE exec: dpu: cz.cuni.mff.xrg.odcs.backend.execution.Engine - Checking for new executions. 2013-12-31 12:39:50,672 [dpu: SPARQL Loader nsoud.cz] DEBUG exec:139 dpu:166 cz.cuni.mff.xrg.odcs.loader.rdf.SPARQLoader - Data 40/50 part loaded successful 2013-12-31 12:39:50,673 [dpu: SPARQL Loader nsoud.cz] DEBUG exec:139 dpu:166 cz.cuni.mff.xrg.odcs.loader.rdf.SPARQLoader - Query for moving data from temp GRAPH http://test/2013-12-31/nsoud5000/temp to target GRAPH http://test/2013-12-31/nsoud5000 prepared. 
2013-12-31 12:39:50,795 [dpu: SPARQL Loader nsoud.cz] DEBUG exec:139 dpu:166 cz.cuni.mff.xrg.odcs.loader.rdf.SPARQLoader - All data from temp GRAPH http://test/2013-12-31/nsoud5000/temp to GRAPH http://test/2013-12-31/nsoud5000 were moved sucessfully 2013-12-31 12:39:51,031 [dpu: SPARQL Loader nsoud.cz] INFO exec:139 dpu:166 cz.cuni.mff.xrg.odcs.backend.dpu.event.DPUMessage - DPU 'SPARQL Loader nsoud.cz' publish message short: 'Loading data to SPARQL endpoint ends SUCCESSFULLY' long: '' 2013-12-31 12:39:51,038 [dpu: SPARQL Loader nsoud.cz] DEBUG exec:139 dpu:166 c.cuni.mff.xrg.odcs.backend.execution.dpu.Executor - Executing DPU ... done 2013-12-31 12:39:51,038 [dpu: SPARQL Loader nsoud.cz] TRACE exec:139 dpu:166 c.cuni.mff.xrg.odcs.backend.execution.dpu.Executor - DPU.executeInstance completed .. 2013-12-31 12:39:51,039 [dpu: SPARQL Loader nsoud.cz] DEBUG exec:139 dpu:166 c.cuni.mff.xrg.odcs.backend.execution.dpu.Executor - Executing post-executors ... done 2013-12-31 12:39:51,039 [dpu: SPARQL Loader nsoud.cz] TRACE exec:139 dpu:166 c.cuni.mff.xrg.odcs.backend.execution.dpu.Executor - Executing post-executor: ContextSaver 2013-12-31 12:39:51,039 [dpu: SPARQL Loader nsoud.cz] DEBUG exec:139 dpu:166 c.cuni.mff.xrg.odcs.backend.execution.dpu.Executor - Executing post-executors ... done 2013-12-31 12:39:51,073 [dpu: SPARQL Loader nsoud.cz] INFO exec:139 dpu:166 cz.cuni.mff.xrg.odcs.backend.dpu.event.DPUEvent - DPU 'SPARQL Loader nsoud.cz' finished 2013-12-31 12:39:51,080 [dpu: SPARQL Loader nsoud.cz] TRACE exec:139 dpu:166 c.cuni.mff.xrg.odcs.backend.execution.dpu.Executor - DPU execution thread is about to finish 2013-12-31 12:39:51,223 [taskExecutor-6] DEBUG exec: dpu: c.c.mff.xrg.odcs.rdf.repositories.VirtuosoRDFRepo - Query select * where {?s ?p ?o} limit 1 is valid. 2013-12-31 12:39:51,224 [taskExecutor-6] DEBUG exec: dpu: c.c.mff.xrg.odcs.rdf.repositories.VirtuosoRDFRepo - Query select * where {?s ?p ?o} limit 1 has not null result. 
2013-12-31 12:39:51,225 [taskExecutor-6] TRACE exec: dpu: cz.cuni.mff.xrg.odcs.backend.monitor.RdfDatabase - executeQuery:ends 2013-12-31 12:39:51,227 [Thread-15] DEBUG exec: dpu: c.c.mff.xrg.odcs.rdf.repositories.VirtuosoRDFRepo - Repository with data graph http://linked.opendata.cz/resource/odcs/internal/ destroyed SUCCESSFUL. 2013-12-31 12:39:52,373 [taskExecutor-7] DEBUG exec: dpu: c.c.mff.xrg.odcs.backend.logback.SqlAppenderImpl - flush() : get connection 2013-12-31 12:39:52,374 [taskExecutor-7] DEBUG exec: dpu: c.c.mff.xrg.odcs.backend.logback.SqlAppenderImpl - flush() : flushIntoDatabase 2013-12-31 12:39:52,376 [taskExecutor-7] DEBUG exec: dpu: c.c.mff.xrg.odcs.backend.logback.SqlAppenderImpl - flush() -> finished in: 3 ms 2013-12-31 12:39:55,960 [pool-2-thread-2] DEBUG exec:139 dpu: c.c.m.x.o.backend.execution.pipeline.impl.CleanUp - CleanUp start .. 2013-12-31 12:39:55,961 [pool-2-thread-2] INFO exec:139 dpu: c.c.mff.xrg.odcs.rdf.repositories.VirtuosoRDFRepo - Virtuoso repository with data graph http://linked.opendata.cz/resource/odcs/internal/pipeline/exec/139/dpu/166/du/0 succesfully shut down 2013-12-31 12:39:55,961 [Thread-16] DEBUG exec:139 dpu: c.c.mff.xrg.odcs.rdf.repositories.VirtuosoRDFRepo - Repository with data graph http://linked.opendata.cz/resource/odcs/internal/pipeline/exec/139/dpu/166/du/0 destroyed SUCCESSFUL. 2013-12-31 12:39:55,962 [pool-2-thread-2] INFO exec:139 dpu: c.c.mff.xrg.odcs.rdf.repositories.VirtuosoRDFRepo - Virtuoso repository with data graph http://linked.opendata.cz/resource/odcs/internal/pipeline/exec/139/dpu/167/du/0 succesfully shut down 2013-12-31 12:39:55,964 [Thread-17] DEBUG exec:139 dpu: c.c.mff.xrg.odcs.rdf.repositories.VirtuosoRDFRepo - Repository with data graph http://linked.opendata.cz/resource/odcs/internal/pipeline/exec/139/dpu/167/du/0 destroyed SUCCESSFUL. 2013-12-31 12:39:55,964 [pool-2-thread-2] DEBUG exec:139 dpu: c.c.m.x.o.backend.execution.pipeline.impl.CleanUp - CleanUp has been finished .. 
2013-12-31 12:39:55,968 [pool-2-thread-2] INFO exec:139 dpu: c.c.m.x.o.backend.pipeline.event.PipelineFinished - Pipeline finished with status: FINISHED_WARNING

tomas-knap commented 10 years ago

Context of the problem: I run a pipeline with one extractor and one loader (116, http://odcs.xrg.cz:8080/odcleanstore-test/#!PipelineList). During loading I kill the Virtuoso instance that the pipeline uses for storing intermediate results.

The problem is again that the loader is killed after loading 40/50 parts. Then the database is restarted and the loader seemingly continues, but in the resulting graph "http://test/2013-12-31/nsoud5000" I have only 4000 triples instead of 5000.

Everything is at http://odcs.xrg.cz:8080/odcleanstore-test/#!PipelineList

tomas-knap commented 10 years ago

What is also strange: when I go to Browse and Query and look at the output of the SPARQL extractor, it shows me: screen shot 2013-12-31 at 3 01 05 pm

Yet in the result, i.e. what comes out of the SPARQL extractor, goes into the SPARQL loader and then into the target DB, there are 4000 triples.

tomas-knap commented 10 years ago

It also does not work when the DB is restarted during the SPARQL extractor! Please check that once again; you can try pipeline 117 on the ODCS test instance.

I was loading 50000 triples, restarted the DB and got only: screen shot 2013-12-31 at 3 11 23 pm

tomesj commented 10 years ago

I created the same pipeline on my machine, but I cannot reproduce the described problem: when the DB goes down and comes back, all the parts are loaded, and the resulting graph contains all the data it should. It is not just 4000 triples or fewer; all 5000 are there.

What is also strange in your log is that loading finishes after 40/50 parts. Loading terminates only when the getInsertQueryPart method returns null, and that happens only when the first call of hasNext() on the GraphQueryResult returns false, i.e. there are no more elements to load.
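The termination condition described above can be sketched as follows. This is an illustrative model, not the SPARQLoader code: `nextPart` is a hypothetical stand-in for getInsertQueryPart, and the `Iterator` stands in for the GraphQueryResult.

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import java.util.function.Consumer;

public class ChunkedLoader {

    /**
     * Pulls fixed-size parts from the source until the iterator is
     * exhausted; returns how many parts were loaded. A part is null only
     * when hasNext() is false at the start of a chunk, mirroring the
     * described getInsertQueryPart contract.
     */
    public static int loadAll(Iterator<String> source, int chunkSize,
                              Consumer<List<String>> load) {
        int parts = 0;
        while (true) {
            List<String> part = nextPart(source, chunkSize);
            if (part == null) {
                return parts;          // no more data: normal termination
            }
            load.accept(part);
            parts++;
        }
    }

    private static List<String> nextPart(Iterator<String> source, int chunkSize) {
        if (!source.hasNext()) {
            return null;               // the only way loading can stop early
        }
        List<String> part = new ArrayList<>();
        while (source.hasNext() && part.size() < chunkSize) {
            part.add(source.next());
        }
        return part;
    }
}
```

Under this model, stopping at 40/50 parts would mean hasNext() returned false while data remained, e.g. because the underlying result was re-created against a restarted server.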

I cannot reproduce it at all; on my machine everything runs correctly, otherwise I would have found this bug long ago. What should we do about it?

tomas-knap commented 10 years ago

And when you compare my log and your log (when killing and restoring the DB), are they identical? Or do they differ?

tomesj commented 10 years ago

One more thing:

@skodape, can you explain to me why this happens? How do you handle a pipeline restart?

Where could the potential problem be?

This definitely needs to be fixed, because a pipeline (backend) restart behaves incorrectly with respect to the data, whether or not outages are involved.

skodapetr commented 10 years ago

Do you have a log? You surely do .. in a file .. (I am just working on two other issues, so it would speed things up a bit ..) Ideally from before and after the restart .. and maybe also on Skype ..

tomesj commented 10 years ago

To @tomas-knap: My log from killing and restoring the DB is different; I am attaching that part.

2014-01-01 18:01:20,779 [dpu: SPARQL Loader nsoud.cz] DEBUG exec:318 dpu:192 c.c.m.x.o.l.rdf.SPARQLoader - Data 29/50 part loaded successful 2014-01-01 18:01:20,883 [dpu: SPARQL Loader nsoud.cz] DEBUG exec:318 dpu:192 c.c.m.x.o.l.rdf.SPARQLoader - Data 30/50 part loaded successful 2014-01-01 18:01:20,987 [dpu: SPARQL Loader nsoud.cz] DEBUG exec:318 dpu:192 c.c.m.x.o.l.rdf.SPARQLoader - Data 31/50 part loaded successful 2014-01-01 18:01:21,101 [dpu: SPARQL Loader nsoud.cz] DEBUG exec:318 dpu:192 c.c.m.x.o.l.rdf.SPARQLoader - Data 32/50 part loaded successful 2014-01-01 18:01:21,238 [dpu: SPARQL Loader nsoud.cz] DEBUG exec:318 dpu:192 c.c.m.x.o.l.rdf.SPARQLoader - Data 33/50 part loaded successful 2014-01-01 18:01:23,184 [taskScheduler-7] TRACE exec: dpu: c.c.m.x.o.b.execution.Engine - Checking for new executions. 2014-01-01 18:01:23,315 [taskScheduler-7] INFO exec: dpu: o.e.p.logging.default - 2014-01-01 18:01:23.309--UnitOfWork(28384019)--Communication failure detected when attempting to perform read query outside of a transaction. Attempting to retry query. Error was: Exception [EclipseLink-4002](Eclipse Persistence Services - 2.5.1.v20130918-f2b9fc5): org.eclipse.persistence.exceptions.DatabaseException Internal Exception: virtuoso.jdbc4.VirtuosoException: Virtuoso Communications Link Failure (timeout) : Connection to the server lost Error Code: -3 Call: SELECT ID, t_end, debug_mode, t_last_change, silent_mode, t_start, STATUS, stop, debugnode_id, owner_id, pipeline_id, schedule_id, context_id FROM exec_pipeline WHERE (STATUS = ?) bind => [0] Query: ReadAllQuery(referenceClass=PipelineExecution sql="SELECT ID, t_end, debug_mode, t_last_change, silent_mode, t_start, STATUS, stop, debugnode_id, owner_id, pipeline_id, schedule_id, context_id FROM exec_pipeline WHERE (STATUS = ?)"). 
2014-01-01 18:01:23,320 [pool-2-thread-1] INFO exec:318 dpu:192 o.e.p.logging.default - 2014-01-01 18:01:23.319--UnitOfWork(22187231)--Communication failure detected when attempting to perform read query outside of a transaction. Attempting to retry query. Error was: Exception [EclipseLink-4002](Eclipse Persistence Services - 2.5.1.v20130918-f2b9fc5): org.eclipse.persistence.exceptions.DatabaseException Internal Exception: virtuoso.jdbc4.VirtuosoException: Virtuoso Communications Link Failure (timeout) : Connection to the server lost Error Code: -3 Call: SELECT ID, t_end, debug_mode, t_last_change, silent_mode, t_start, STATUS, stop, debugnode_id, owner_id, pipeline_id, schedule_id, context_id FROM exec_pipeline WHERE (ID = ?) bind => [318] Query: ReadObjectQuery(name="readPipelineExecution" referenceClass=PipelineExecution sql="SELECT ID, t_end, debug_mode, t_last_change, silent_mode, t_start, STATUS, stop, debugnode_id, owner_id, pipeline_id, schedule_id, context_id FROM exec_pipeline WHERE (ID = ?)"). 2014-01-01 18:01:24,311 [taskScheduler-7] INFO exec: dpu: o.e.p.logging.default - 2014-01-01 18:01:24.311--UnitOfWork(28384019)--Communication failure detected when attempting to perform read query outside of a transaction. Attempting to retry query. Error was: Exception [EclipseLink-4002](Eclipse Persistence Services - 2.5.1.v20130918-f2b9fc5): org.eclipse.persistence.exceptions.DatabaseException Internal Exception: virtuoso.jdbc4.VirtuosoException: Connection failed: Connection refused: connect Error Code: -3 Call: SELECT ID, t_end, debug_mode, t_last_change, silent_mode, t_start, STATUS, stop, debugnode_id, owner_id, pipeline_id, schedule_id, context_id FROM exec_pipeline WHERE (STATUS = ?) bind => [0] Query: ReadAllQuery(referenceClass=PipelineExecution sql="SELECT ID, t_end, debug_mode, t_last_change, silent_mode, t_start, STATUS, stop, debugnode_id, owner_id, pipeline_id, schedule_id, context_id FROM exec_pipeline WHERE (STATUS = ?)"). 
2014-01-01 18:01:24,321 [pool-2-thread-1] INFO exec:318 dpu:192 o.e.p.logging.default - 2014-01-01 18:01:24.32--UnitOfWork(22187231)--Communication failure detected when attempting to perform read query outside of a transaction. Attempting to retry query. Error was: Exception [EclipseLink-4002](Eclipse Persistence Services - 2.5.1.v20130918-f2b9fc5): org.eclipse.persistence.exceptions.DatabaseException Internal Exception: virtuoso.jdbc4.VirtuosoException: Connection failed: Connection refused: connect Error Code: -3 Call: SELECT ID, t_end, debug_mode, t_last_change, silent_mode, t_start, STATUS, stop, debugnode_id, owner_id, pipeline_id, schedule_id, context_id FROM exec_pipeline WHERE (ID = ?) bind => [318] Query: ReadObjectQuery(name="readPipelineExecution" referenceClass=PipelineExecution sql="SELECT ID, t_end, debug_mode, t_last_change, silent_mode, t_start, STATUS, stop, debugnode_id, owner_id, pipeline_id, schedule_id, context_id FROM exec_pipeline WHERE (ID = ?)"). 2014-01-01 18:01:24,331 [dpu: SPARQL Loader nsoud.cz] DEBUG exec:318 dpu:192 c.c.m.x.o.r.r.VirtuosoRDFRepo - 1/infinity attempt to reconnect http://localhost:8890/sparql-auth FAILED 2014-01-01 18:01:24,683 [taskExecutor-6] DEBUG exec: dpu: c.c.m.x.o.b.l.SqlAppenderImpl - flush() : get connection 2014-01-01 18:01:24,683 [taskExecutor-6] DEBUG exec: dpu: c.c.m.x.o.b.l.SqlAppenderImpl - flush() : flushIntoDatabase 2014-01-01 18:01:24,698 [taskExecutor-6] ERROR exec: dpu: c.c.m.x.o.b.l.SqlAppenderImpl - Can't save logs into database. 
virtuoso.jdbc4.VirtuosoException: Virtuoso Communications Link Failure (timeout) : Connection to the server lost at virtuoso.jdbc4.VirtuosoFuture.nextResult(Unknown Source) ~[virtjdbc-4.1.jar:na] at virtuoso.jdbc4.VirtuosoResultSet.process_result(Unknown Source) ~[virtjdbc-4.1.jar:na] at virtuoso.jdbc4.VirtuosoResultSet.(Unknown Source) ~[virtjdbc-4.1.jar:na] at virtuoso.jdbc4.VirtuosoPreparedStatement.(Unknown Source) ~[virtjdbc-4.1.jar:na] at virtuoso.jdbc4.VirtuosoConnection.prepareStatement(Unknown Source) ~[virtjdbc-4.1.jar:na] at virtuoso.jdbc4.VirtuosoConnection.prepareStatement(Unknown Source) ~[virtjdbc-4.1.jar:na] at org.apache.commons.dbcp.DelegatingConnection.prepareStatement(DelegatingConnection.java:281) ~[commons-dbcp-1.4.jar:1.4] at org.apache.commons.dbcp.PoolingDataSource$PoolGuardConnectionWrapper.prepareStatement(PoolingDataSource.java:313) ~[commons-dbcp-1.4.jar:1.4] at cz.cuni.mff.xrg.odcs.backend.logback.SqlAppenderImpl.flushIntoDatabase(SqlAppenderImpl.java:109) [classes/:na] at cz.cuni.mff.xrg.odcs.backend.logback.SqlAppenderImpl.flush(SqlAppenderImpl.java:171) [classes/:na] at sun.reflect.GeneratedMethodAccessor22.invoke(Unknown Source) ~[na:na] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.7.0_21] at java.lang.reflect.Method.invoke(Method.java:601) ~[na:1.7.0_21] at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:319) [spring-aop-3.1.2.RELEASE.jar:3.1.2.RELEASE] at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:183) [spring-aop-3.1.2.RELEASE.jar:3.1.2.RELEASE] at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:150) [spring-aop-3.1.2.RELEASE.jar:3.1.2.RELEASE] at org.springframework.aop.interceptor.AsyncExecutionInterceptor$1.call(AsyncExecutionInterceptor.java:83) [spring-aop-3.1.2.RELEASE.jar:3.1.2.RELEASE] at 
java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334) [na:1.7.0_21] at java.util.concurrent.FutureTask.run(FutureTask.java:166) [na:1.7.0_21] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [na:1.7.0_21] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [na:1.7.0_21] at java.lang.Thread.run(Thread.java:722) [na:1.7.0_21] 2014-01-01 18:01:26,331 [dpu: SPARQL Loader nsoud.cz] DEBUG exec:318 dpu:192 c.c.m.x.o.r.r.VirtuosoRDFRepo - 2/infinity attempt to reconnect http://localhost:8890/sparql-auth FAILED 2014-01-01 18:01:27,199 [taskExecutor-6] DEBUG exec: dpu: c.c.m.x.o.b.l.SqlAppenderImpl - flush() : get connection 2014-01-01 18:01:28,202 [taskExecutor-6] ERROR exec: dpu: c.c.m.x.o.b.l.SqlAppenderImpl - Failed to get sql connection, next try in second. virtuoso.jdbc4.VirtuosoException: Connection failed: Connection refused: connect at virtuoso.jdbc4.VirtuosoConnection.connect(Unknown Source) ~[virtjdbc-4.1.jar:na] at virtuoso.jdbc4.VirtuosoConnection.connect(Unknown Source) ~[virtjdbc-4.1.jar:na] at virtuoso.jdbc4.VirtuosoConnection.(Unknown Source) ~[virtjdbc-4.1.jar:na] at virtuoso.jdbc4.Driver.connect(Unknown Source) ~[virtjdbc-4.1.jar:na] at org.apache.commons.dbcp.DriverConnectionFactory.createConnection(DriverConnectionFactory.java:38) ~[commons-dbcp-1.4.jar:1.4] at org.apache.commons.dbcp.PoolableConnectionFactory.makeObject(PoolableConnectionFactory.java:582) ~[commons-dbcp-1.4.jar:1.4] at org.apache.commons.pool.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:1148) ~[commons-pool-1.5.4.jar:1.5.4] at org.apache.commons.dbcp.PoolingDataSource.getConnection(PoolingDataSource.java:106) ~[commons-dbcp-1.4.jar:1.4] at org.apache.commons.dbcp.BasicDataSource.getConnection(BasicDataSource.java:1044) ~[commons-dbcp-1.4.jar:1.4] at cz.cuni.mff.xrg.odcs.backend.logback.PooledConnectionSource.getConnection(PooledConnectionSource.java:52) ~[classes/:na] at 
cz.cuni.mff.xrg.odcs.backend.logback.SqlAppenderImpl.getConnection(SqlAppenderImpl.java:87) [classes/:na] at cz.cuni.mff.xrg.odcs.backend.logback.SqlAppenderImpl.flush(SqlAppenderImpl.java:167) [classes/:na] at sun.reflect.GeneratedMethodAccessor22.invoke(Unknown Source) ~[na:na] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.7.0_21] at java.lang.reflect.Method.invoke(Method.java:601) ~[na:1.7.0_21] at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:319) [spring-aop-3.1.2.RELEASE.jar:3.1.2.RELEASE] at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:183) [spring-aop-3.1.2.RELEASE.jar:3.1.2.RELEASE] at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:150) [spring-aop-3.1.2.RELEASE.jar:3.1.2.RELEASE] at org.springframework.aop.interceptor.AsyncExecutionInterceptor$1.call(AsyncExecutionInterceptor.java:83) [spring-aop-3.1.2.RELEASE.jar:3.1.2.RELEASE] at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334) [na:1.7.0_21] at java.util.concurrent.FutureTask.run(FutureTask.java:166) [na:1.7.0_21] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [na:1.7.0_21] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [na:1.7.0_21] at java.lang.Thread.run(Thread.java:722) [na:1.7.0_21] 2014-01-01 18:01:28,330 [dpu: SPARQL Loader nsoud.cz] DEBUG exec:318 dpu:192 c.c.m.x.o.r.r.VirtuosoRDFRepo - 3/infinity attempt to reconnect http://localhost:8890/sparql-auth FAILED 2014-01-01 18:01:30,204 [taskExecutor-6] ERROR exec: dpu: c.c.m.x.o.b.l.SqlAppenderImpl - Failed to get sql connection, next try in second. 
virtuoso.jdbc4.VirtuosoException: Connection failed: Connection refused: connect at virtuoso.jdbc4.VirtuosoConnection.connect(Unknown Source) ~[virtjdbc-4.1.jar:na] at virtuoso.jdbc4.VirtuosoConnection.connect(Unknown Source) ~[virtjdbc-4.1.jar:na] at virtuoso.jdbc4.VirtuosoConnection.(Unknown Source) ~[virtjdbc-4.1.jar:na] at virtuoso.jdbc4.Driver.connect(Unknown Source) ~[virtjdbc-4.1.jar:na] at org.apache.commons.dbcp.DriverConnectionFactory.createConnection(DriverConnectionFactory.java:38) ~[commons-dbcp-1.4.jar:1.4] at org.apache.commons.dbcp.PoolableConnectionFactory.makeObject(PoolableConnectionFactory.java:582) ~[commons-dbcp-1.4.jar:1.4] at org.apache.commons.pool.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:1148) ~[commons-pool-1.5.4.jar:1.5.4] at org.apache.commons.dbcp.PoolingDataSource.getConnection(PoolingDataSource.java:106) ~[commons-dbcp-1.4.jar:1.4] at org.apache.commons.dbcp.BasicDataSource.getConnection(BasicDataSource.java:1044) ~[commons-dbcp-1.4.jar:1.4] at cz.cuni.mff.xrg.odcs.backend.logback.PooledConnectionSource.getConnection(PooledConnectionSource.java:52) ~[classes/:na] at cz.cuni.mff.xrg.odcs.backend.logback.SqlAppenderImpl.getConnection(SqlAppenderImpl.java:87) [classes/:na] at cz.cuni.mff.xrg.odcs.backend.logback.SqlAppenderImpl.flush(SqlAppenderImpl.java:167) [classes/:na] at sun.reflect.GeneratedMethodAccessor22.invoke(Unknown Source) ~[na:na] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.7.0_21] at java.lang.reflect.Method.invoke(Method.java:601) ~[na:1.7.0_21] at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:319) [spring-aop-3.1.2.RELEASE.jar:3.1.2.RELEASE] at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:183) [spring-aop-3.1.2.RELEASE.jar:3.1.2.RELEASE] at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:150) 
[spring-aop-3.1.2.RELEASE.jar:3.1.2.RELEASE] at org.springframework.aop.interceptor.AsyncExecutionInterceptor$1.call(AsyncExecutionInterceptor.java:83) [spring-aop-3.1.2.RELEASE.jar:3.1.2.RELEASE] at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334) [na:1.7.0_21] at java.util.concurrent.FutureTask.run(FutureTask.java:166) [na:1.7.0_21] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [na:1.7.0_21] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [na:1.7.0_21] at java.lang.Thread.run(Thread.java:722) [na:1.7.0_21] 2014-01-01 18:01:30,313 [taskScheduler-7] INFO exec: dpu: o.e.p.logging.default - 2014-01-01 18:01:30.313--UnitOfWork(28384019)--Communication failure detected when attempting to perform read query outside of a transaction. Attempting to retry query. Error was: Exception [EclipseLink-4002](Eclipse Persistence Services - 2.5.1.v20130918-f2b9fc5): org.eclipse.persistence.exceptions.DatabaseException Internal Exception: virtuoso.jdbc4.VirtuosoException: Connection failed: Connection refused: connect Error Code: -3 Call: SELECT ID, t_end, debug_mode, t_last_change, silent_mode, t_start, STATUS, stop, debugnode_id, owner_id, pipeline_id, schedule_id, context_id FROM exec_pipeline WHERE (STATUS = ?) bind => [0] Query: ReadAllQuery(referenceClass=PipelineExecution sql="SELECT ID, t_end, debug_mode, t_last_change, silent_mode, t_start, STATUS, stop, debugnode_id, owner_id, pipeline_id, schedule_id, context_id FROM exec_pipeline WHERE (STATUS = ?)"). 2014-01-01 18:01:30,323 [pool-2-thread-1] INFO exec:318 dpu:192 o.e.p.logging.default - 2014-01-01 18:01:30.322--UnitOfWork(22187231)--Communication failure detected when attempting to perform read query outside of a transaction. Attempting to retry query. 
Error was: Exception [EclipseLink-4002](Eclipse Persistence Services - 2.5.1.v20130918-f2b9fc5): org.eclipse.persistence.exceptions.DatabaseException Internal Exception: virtuoso.jdbc4.VirtuosoException: Connection failed: Connection refused: connect Error Code: -3 Call: SELECT ID, t_end, debug_mode, t_last_change, silent_mode, t_start, STATUS, stop, debugnode_id, owner_id, pipeline_id, schedule_id, context_id FROM exec_pipeline WHERE (ID = ?) bind => [318] Query: ReadObjectQuery(name="readPipelineExecution" referenceClass=PipelineExecution sql="SELECT ID, t_end, debug_mode, t_last_change, silent_mode, t_start, STATUS, stop, debugnode_id, owner_id, pipeline_id, schedule_id, context_id FROM exec_pipeline WHERE (ID = ?)"). 2014-01-01 18:01:30,332 [dpu: SPARQL Loader nsoud.cz] DEBUG exec:318 dpu:192 c.c.m.x.o.r.r.VirtuosoRDFRepo - 4/infinity attempt to reconnect http://localhost:8890/sparql-auth FAILED 2014-01-01 18:01:32,208 [taskExecutor-6] ERROR exec: dpu: c.c.m.x.o.b.l.SqlAppenderImpl - Failed to get sql connection, next try in second. 
virtuoso.jdbc4.VirtuosoException: Connection failed: Connection refused: connect at virtuoso.jdbc4.VirtuosoConnection.connect(Unknown Source) ~[virtjdbc-4.1.jar:na] at virtuoso.jdbc4.VirtuosoConnection.connect(Unknown Source) ~[virtjdbc-4.1.jar:na] at virtuoso.jdbc4.VirtuosoConnection.(Unknown Source) ~[virtjdbc-4.1.jar:na] at virtuoso.jdbc4.Driver.connect(Unknown Source) ~[virtjdbc-4.1.jar:na] at org.apache.commons.dbcp.DriverConnectionFactory.createConnection(DriverConnectionFactory.java:38) ~[commons-dbcp-1.4.jar:1.4] at org.apache.commons.dbcp.PoolableConnectionFactory.makeObject(PoolableConnectionFactory.java:582) ~[commons-dbcp-1.4.jar:1.4] at org.apache.commons.pool.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:1148) ~[commons-pool-1.5.4.jar:1.5.4] at org.apache.commons.dbcp.PoolingDataSource.getConnection(PoolingDataSource.java:106) ~[commons-dbcp-1.4.jar:1.4] at org.apache.commons.dbcp.BasicDataSource.getConnection(BasicDataSource.java:1044) ~[commons-dbcp-1.4.jar:1.4] at cz.cuni.mff.xrg.odcs.backend.logback.PooledConnectionSource.getConnection(PooledConnectionSource.java:52) ~[classes/:na] at cz.cuni.mff.xrg.odcs.backend.logback.SqlAppenderImpl.getConnection(SqlAppenderImpl.java:87) [classes/:na] at cz.cuni.mff.xrg.odcs.backend.logback.SqlAppenderImpl.flush(SqlAppenderImpl.java:167) [classes/:na] at sun.reflect.GeneratedMethodAccessor22.invoke(Unknown Source) ~[na:na] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.7.0_21] at java.lang.reflect.Method.invoke(Method.java:601) ~[na:1.7.0_21] at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:319) [spring-aop-3.1.2.RELEASE.jar:3.1.2.RELEASE] at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:183) [spring-aop-3.1.2.RELEASE.jar:3.1.2.RELEASE] at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:150) 
[spring-aop-3.1.2.RELEASE.jar:3.1.2.RELEASE] at org.springframework.aop.interceptor.AsyncExecutionInterceptor$1.call(AsyncExecutionInterceptor.java:83) [spring-aop-3.1.2.RELEASE.jar:3.1.2.RELEASE] at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334) [na:1.7.0_21] at java.util.concurrent.FutureTask.run(FutureTask.java:166) [na:1.7.0_21] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [na:1.7.0_21] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [na:1.7.0_21] at java.lang.Thread.run(Thread.java:722) [na:1.7.0_21] 2014-01-01 18:01:32,326 [dpu: SPARQL Loader nsoud.cz] DEBUG exec:318 dpu:192 c.c.m.x.o.r.r.VirtuosoRDFRepo - 5/infinity attempt to reconnect http://localhost:8890/sparql-auth FAILED 2014-01-01 18:01:32,827 [taskScheduler-4] TRACE exec: dpu: c.c.m.x.o.c.a.s.Schedule - onTimeCheck started 2014-01-01 18:01:33,827 [taskScheduler-4] INFO exec: dpu: o.e.p.logging.default - 2014-01-01 18:01:33.827--UnitOfWork(982864)--Communication failure detected when attempting to perform read query outside of a transaction. Attempting to retry query. Error was: Exception [EclipseLink-4002](Eclipse Persistence Services - 2.5.1.v20130918-f2b9fc5): org.eclipse.persistence.exceptions.DatabaseException Internal Exception: virtuoso.jdbc4.VirtuosoException: Connection failed: Connection refused: connect Error Code: -3 Call: SELECT ID, DESCRIPTION, enabled, first_exec, just_once, last_exec, NAME, time_period, period_unit, strict_tolerance, strict_timing, type, user_id, pipeline_id FROM exec_schedule WHERE (type = ?) bind => [1] Query: ReadAllQuery(referenceClass=Schedule sql="SELECT ID, DESCRIPTION, enabled, first_exec, just_once, last_exec, NAME, time_period, period_unit, strict_tolerance, strict_timing, type, user_id, pipeline_id FROM exec_schedule WHERE (type = ?)"). 
2014-01-01 18:01:34,208 [taskExecutor-6] ERROR exec: dpu: c.c.m.x.o.b.l.SqlAppenderImpl - Failed to get sql connection, next try in second. virtuoso.jdbc4.VirtuosoException: Connection failed: Connection refused: connect at virtuoso.jdbc4.VirtuosoConnection.connect(Unknown Source) ~[virtjdbc-4.1.jar:na] at virtuoso.jdbc4.VirtuosoConnection.connect(Unknown Source) ~[virtjdbc-4.1.jar:na] at virtuoso.jdbc4.VirtuosoConnection.(Unknown Source) ~[virtjdbc-4.1.jar:na] at virtuoso.jdbc4.Driver.connect(Unknown Source) ~[virtjdbc-4.1.jar:na] at org.apache.commons.dbcp.DriverConnectionFactory.createConnection(DriverConnectionFactory.java:38) ~[commons-dbcp-1.4.jar:1.4] at org.apache.commons.dbcp.PoolableConnectionFactory.makeObject(PoolableConnectionFactory.java:582) ~[commons-dbcp-1.4.jar:1.4] at org.apache.commons.pool.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:1148) ~[commons-pool-1.5.4.jar:1.5.4] at org.apache.commons.dbcp.PoolingDataSource.getConnection(PoolingDataSource.java:106) ~[commons-dbcp-1.4.jar:1.4] at org.apache.commons.dbcp.BasicDataSource.getConnection(BasicDataSource.java:1044) ~[commons-dbcp-1.4.jar:1.4] at cz.cuni.mff.xrg.odcs.backend.logback.PooledConnectionSource.getConnection(PooledConnectionSource.java:52) ~[classes/:na] at cz.cuni.mff.xrg.odcs.backend.logback.SqlAppenderImpl.getConnection(SqlAppenderImpl.java:87) [classes/:na] at cz.cuni.mff.xrg.odcs.backend.logback.SqlAppenderImpl.flush(SqlAppenderImpl.java:167) [classes/:na] at sun.reflect.GeneratedMethodAccessor22.invoke(Unknown Source) ~[na:na] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.7.0_21] at java.lang.reflect.Method.invoke(Method.java:601) ~[na:1.7.0_21] at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:319) [spring-aop-3.1.2.RELEASE.jar:3.1.2.RELEASE] at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:183) 
[spring-aop-3.1.2.RELEASE.jar:3.1.2.RELEASE] at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:150) [spring-aop-3.1.2.RELEASE.jar:3.1.2.RELEASE] at org.springframework.aop.interceptor.AsyncExecutionInterceptor$1.call(AsyncExecutionInterceptor.java:83) [spring-aop-3.1.2.RELEASE.jar:3.1.2.RELEASE] at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334) [na:1.7.0_21] at java.util.concurrent.FutureTask.run(FutureTask.java:166) [na:1.7.0_21] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [na:1.7.0_21] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [na:1.7.0_21] at java.lang.Thread.run(Thread.java:722) [na:1.7.0_21] 2014-01-01 18:01:34,329 [dpu: SPARQL Loader nsoud.cz] DEBUG exec:318 dpu:192 c.c.m.x.o.r.r.VirtuosoRDFRepo - 6/infinity attempt to reconnect http://localhost:8890/sparql-auth FAILED 2014-01-01 18:01:34,827 [taskScheduler-4] INFO exec: dpu: o.e.p.logging.default - 2014-01-01 18:01:34.827--UnitOfWork(982864)--Communication failure detected when attempting to perform read query outside of a transaction. Attempting to retry query. Error was: Exception [EclipseLink-4002](Eclipse Persistence Services - 2.5.1.v20130918-f2b9fc5): org.eclipse.persistence.exceptions.DatabaseException Internal Exception: virtuoso.jdbc4.VirtuosoException: Connection failed: Connection refused: connect Error Code: -3 Call: SELECT ID, DESCRIPTION, enabled, first_exec, just_once, last_exec, NAME, time_period, period_unit, strict_tolerance, strict_timing, type, user_id, pipeline_id FROM exec_schedule WHERE (type = ?) bind => [1] Query: ReadAllQuery(referenceClass=Schedule sql="SELECT ID, DESCRIPTION, enabled, first_exec, just_once, last_exec, NAME, time_period, period_unit, strict_tolerance, strict_timing, type, user_id, pipeline_id FROM exec_schedule WHERE (type = ?)"). 
2014-01-01 18:01:36,208 [taskExecutor-6] ERROR exec: dpu: c.c.m.x.o.b.l.SqlAppenderImpl - Failed to get sql connection, next try in second. virtuoso.jdbc4.VirtuosoException: Connection failed: Connection refused: connect at virtuoso.jdbc4.VirtuosoConnection.connect(Unknown Source) ~[virtjdbc-4.1.jar:na] at virtuoso.jdbc4.VirtuosoConnection.connect(Unknown Source) ~[virtjdbc-4.1.jar:na] at virtuoso.jdbc4.VirtuosoConnection.(Unknown Source) ~[virtjdbc-4.1.jar:na] at virtuoso.jdbc4.Driver.connect(Unknown Source) ~[virtjdbc-4.1.jar:na] at org.apache.commons.dbcp.DriverConnectionFactory.createConnection(DriverConnectionFactory.java:38) ~[commons-dbcp-1.4.jar:1.4] at org.apache.commons.dbcp.PoolableConnectionFactory.makeObject(PoolableConnectionFactory.java:582) ~[commons-dbcp-1.4.jar:1.4] at org.apache.commons.pool.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:1148) ~[commons-pool-1.5.4.jar:1.5.4] at org.apache.commons.dbcp.PoolingDataSource.getConnection(PoolingDataSource.java:106) ~[commons-dbcp-1.4.jar:1.4] at org.apache.commons.dbcp.BasicDataSource.getConnection(BasicDataSource.java:1044) ~[commons-dbcp-1.4.jar:1.4] at cz.cuni.mff.xrg.odcs.backend.logback.PooledConnectionSource.getConnection(PooledConnectionSource.java:52) ~[classes/:na] at cz.cuni.mff.xrg.odcs.backend.logback.SqlAppenderImpl.getConnection(SqlAppenderImpl.java:87) [classes/:na] at cz.cuni.mff.xrg.odcs.backend.logback.SqlAppenderImpl.flush(SqlAppenderImpl.java:167) [classes/:na] at sun.reflect.GeneratedMethodAccessor22.invoke(Unknown Source) ~[na:na] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.7.0_21] at java.lang.reflect.Method.invoke(Method.java:601) ~[na:1.7.0_21] at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:319) [spring-aop-3.1.2.RELEASE.jar:3.1.2.RELEASE] at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:183) 
[spring-aop-3.1.2.RELEASE.jar:3.1.2.RELEASE] at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:150) [spring-aop-3.1.2.RELEASE.jar:3.1.2.RELEASE] at org.springframework.aop.interceptor.AsyncExecutionInterceptor$1.call(AsyncExecutionInterceptor.java:83) [spring-aop-3.1.2.RELEASE.jar:3.1.2.RELEASE] at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334) [na:1.7.0_21] at java.util.concurrent.FutureTask.run(FutureTask.java:166) [na:1.7.0_21] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [na:1.7.0_21] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [na:1.7.0_21] at java.lang.Thread.run(Thread.java:722) [na:1.7.0_21] 2014-01-01 18:01:36,327 [dpu: SPARQL Loader nsoud.cz] DEBUG exec:318 dpu:192 c.c.m.x.o.r.r.VirtuosoRDFRepo - 7/infinity attempt to reconnect http://localhost:8890/sparql-auth FAILED 2014-01-01 18:01:37,225 [taskExecutor-6] DEBUG exec: dpu: c.c.m.x.o.b.l.SqlAppenderImpl - flush() : flushIntoDatabase 2014-01-01 18:01:37,258 [taskExecutor-6] DEBUG exec: dpu: c.c.m.x.o.b.l.SqlAppenderImpl - flush() -> finished in: 12575 ms 2014-01-01 18:01:37,259 [taskExecutor-8] DEBUG exec: dpu: c.c.m.x.o.b.l.SqlAppenderImpl - flush() : get connection 2014-01-01 18:01:37,259 [taskExecutor-8] DEBUG exec: dpu: c.c.m.x.o.b.l.SqlAppenderImpl - flush() : flushIntoDatabase 2014-01-01 18:01:37,267 [taskExecutor-8] DEBUG exec: dpu: c.c.m.x.o.b.l.SqlAppenderImpl - flush() -> finished in: 8 ms 2014-01-01 18:01:37,440 [dpu: SPARQL Loader nsoud.cz] DEBUG exec:318 dpu:192 c.c.m.x.o.l.rdf.SPARQLoader - Data 34/50 part loaded successful 2014-01-01 18:01:37,443 [dpu: SPARQL Loader nsoud.cz] DEBUG exec:318 dpu:192 c.c.m.x.o.l.rdf.SPARQLoader - Problem by creating 35. data part - ATTEMPT number 1: Given query for lazy triples is probably not valid. 
: SPARQL execute failed:[CONSTRUCT {?a ?b ?c} WHERE {?a ?b ?c} LIMIT 100 OFFSET 3500] Exception:virtuoso.jdbc4.VirtuosoException: Virtuoso Communications Link Failure (timeout) : Software caused connection abort: recv failed 2014-01-01 18:01:37,583 [taskExecutor-9] DEBUG exec: dpu: c.c.m.x.o.b.l.SqlAppenderImpl - flush() : get connection 2014-01-01 18:01:37,583 [taskExecutor-9] DEBUG exec: dpu: c.c.m.x.o.b.l.SqlAppenderImpl - flush() : flushIntoDatabase 2014-01-01 18:01:37,588 [taskExecutor-9] DEBUG exec: dpu: c.c.m.x.o.b.l.SqlAppenderImpl - flush() -> finished in: 5 ms 2014-01-01 18:01:38,595 [dpu: SPARQL Loader nsoud.cz] DEBUG exec:318 dpu:192 c.c.m.x.o.l.rdf.SPARQLoader - Data 35/50 part loaded successful 2014-01-01 18:01:38,729 [dpu: SPARQL Loader nsoud.cz] DEBUG exec:318 dpu:192 c.c.m.x.o.l.rdf.SPARQLoader - Data 36/50 part loaded successful 2014-01-01 18:01:38,871 [dpu: SPARQL Loader nsoud.cz] DEBUG exec:318 dpu:192 c.c.m.x.o.l.rdf.SPARQLoader - Data 37/50 part loaded successful 2014-01-01 18:01:39,026 [dpu: SPARQL Loader nsoud.cz] DEBUG exec:318 dpu:192 c.c.m.x.o.l.rdf.SPARQLoader - Data 38/50 part loaded successful 2014-01-01 18:01:39,186 [dpu: SPARQL Loader nsoud.cz] DEBUG exec:318 dpu:192 c.c.m.x.o.l.rdf.SPARQLoader - Data 39/50 part loaded successful 2014-01-01 18:01:39,332 [dpu: SPARQL Loader nsoud.cz] DEBUG exec:318 dpu:192 c.c.m.x.o.l.rdf.SPARQLoader - Data 40/50 part loaded successful 2014-01-01 18:01:39,514 [dpu: SPARQL Loader nsoud.cz] DEBUG exec:318 dpu:192 c.c.m.x.o.l.rdf.SPARQLoader - Data 41/50 part loaded successful 2014-01-01 18:01:39,675 [dpu: SPARQL Loader nsoud.cz] DEBUG exec:318 dpu:192 c.c.m.x.o.l.rdf.SPARQLoader - Data 42/50 part loaded successful 2014-01-01 18:01:39,824 [dpu: SPARQL Loader nsoud.cz] DEBUG exec:318 dpu:192 c.c.m.x.o.l.rdf.SPARQLoader - Data 43/50 part loaded successful 2014-01-01 18:01:39,838 [taskScheduler-4] TRACE exec: dpu: c.c.m.x.o.c.a.s.Schedule - onTimeCheck finished 2014-01-01 18:01:39,990 [dpu: SPARQL Loader 
nsoud.cz] DEBUG exec:318 dpu:192 c.c.m.x.o.l.rdf.SPARQLoader - Data 44/50 part loaded successful 2014-01-01 18:01:40,143 [dpu: SPARQL Loader nsoud.cz] DEBUG exec:318 dpu:192 c.c.m.x.o.l.rdf.SPARQLoader - Data 45/50 part loaded successful 2014-01-01 18:01:40,304 [dpu: SPARQL Loader nsoud.cz] DEBUG exec:318 dpu:192 c.c.m.x.o.l.rdf.SPARQLoader - Data 46/50 part loaded successful 2014-01-01 18:01:40,462 [dpu: SPARQL Loader nsoud.cz] DEBUG exec:318 dpu:192 c.c.m.x.o.l.rdf.SPARQLoader - Data 47/50 part loaded successful 2014-01-01 18:01:40,621 [dpu: SPARQL Loader nsoud.cz] DEBUG exec:318 dpu:192 c.c.m.x.o.l.rdf.SPARQLoader - Data 48/50 part loaded successful 2014-01-01 18:01:40,792 [dpu: SPARQL Loader nsoud.cz] DEBUG exec:318 dpu:192 c.c.m.x.o.l.rdf.SPARQLoader - Data 49/50 part loaded successful 2014-01-01 18:01:40,944 [dpu: SPARQL Loader nsoud.cz] DEBUG exec:318 dpu:192 c.c.m.x.o.l.rdf.SPARQLoader - Data 50/50 part loaded successful 2014-01-01 18:01:40,945 [dpu: SPARQL Loader nsoud.cz] DEBUG exec:318 dpu:192 c.c.m.x.o.l.rdf.SPARQLoader - Query for moving data from temp GRAPH http://nsoud5000/temp to target GRAPH http://nsoud5000 prepared. 2014-01-01 18:01:41,293 [dpu: SPARQL Loader nsoud.cz] DEBUG exec:318 dpu:192 c.c.m.x.o.l.rdf.SPARQLoader - All data from temp GRAPH http://nsoud5000/temp to GRAPH http://nsoud5000 were moved sucessfully

tomas-knap commented 10 years ago

Then please compare it and write up the differences.

tomas-knap commented 10 years ago

Please describe how it behaves when the connection drops during the extractor or the loader, schematically, in the style of:

BaseRDFRepo.getConnection() is called, which calls ...
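The retry idea discussed in this thread can be sketched roughly as follows. This is a hedged, illustrative stand-in, not the actual FailureTolerantRepositoryWrapper code: the class `RetrySketch`, the method `withRetry`, and the simulated `RepositoryException` are all hypothetical names. The point it demonstrates is that every connection-using operation is funneled through one retry loop, so a triple whose `add()` failed is replayed once the repository is back, instead of being silently dropped.

```java
import java.util.concurrent.Callable;

// Hypothetical sketch of wrapping every RepositoryConnection operation
// in a retry loop, so a failed addStatement is replayed after reconnect.
public class RetrySketch {

    // Stand-in for org.openrdf.repository.RepositoryException.
    static class RepositoryException extends Exception {
        RepositoryException(String msg) { super(msg); }
    }

    // Keep the operation and replay it until it succeeds or attempts run out.
    static <T> T withRetry(Callable<T> op, int maxAttempts, long waitMs) throws Exception {
        Exception last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return op.call();
            } catch (RepositoryException e) {
                last = e; // connection lost: wait, then replay the same operation
                Thread.sleep(waitMs);
            }
        }
        throw last;
    }

    public static void main(String[] args) throws Exception {
        // Fake connection that fails twice ("Virtuoso restarting"), then succeeds.
        final int[] calls = {0};
        String result = withRetry(() -> {
            calls[0]++;
            if (calls[0] < 3) throw new RepositoryException("link failure");
            return "triple added on attempt " + calls[0];
        }, 10, 1);
        System.out.println(result); // prints: triple added on attempt 3
    }
}
```

Under this scheme the `addStatement` from the issue description would not swallow the exception and lose the current triple; the wrapper would re-run the `connection.add(...)` call after the repository comes back.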

tomas-knap commented 10 years ago

Jirka, and when you check the graph http://nsoud5000 through Virtuoso's /sparql interface on your machine, do you see 5000 triples there? Are you using Virtuoso for intermediate results? That is important; if not, configure Virtuoso for it.

tomesj commented 10 years ago

[screenshots]

To @skodape: This is what the log looks like after a restart. In addition, every restart adds another input/output entry to Browse/Query; the first one always contains the data from before the restart, the rest are already empty (see screenshots).

2014-01-01 18:08:28,848 [main] INFO exec: dpu: c.c.m.x.odcs.backend.AppEntry - Running ... 2014-01-01 18:08:28,877 [taskExecutor-3] DEBUG exec: dpu: o.o.r.DatatypeHandlerRegistry - Registered service class org.openrdf.rio.datatypes.XMLSchemaDatatypeHandler 2014-01-01 18:08:28,882 [taskExecutor-3] DEBUG exec: dpu: o.o.r.DatatypeHandlerRegistry - Registered service class org.openrdf.rio.datatypes.RDFDatatypeHandler 2014-01-01 18:08:28,885 [taskExecutor-3] DEBUG exec: dpu: o.o.r.DatatypeHandlerRegistry - Registered service class org.openrdf.rio.datatypes.DBPediaDatatypeHandler 2014-01-01 18:08:28,888 [taskExecutor-3] DEBUG exec: dpu: o.o.r.DatatypeHandlerRegistry - Registered service class org.openrdf.rio.datatypes.VirtuosoGeometryDatatypeHandler 2014-01-01 18:08:28,894 [taskExecutor-3] DEBUG exec: dpu: o.o.r.DatatypeHandlerRegistry - Registered service class org.openrdf.rio.datatypes.GeoSPARQLDatatypeHandler 2014-01-01 18:08:28,902 [taskExecutor-3] DEBUG exec: dpu: o.o.r.LanguageHandlerRegistry - Registered service class org.openrdf.rio.languages.RFC3066LanguageHandler 2014-01-01 18:08:28,914 [taskExecutor-3] DEBUG exec: dpu: c.c.m.x.o.r.r.VirtuosoRDFRepo - Query select * where {?s ?p ?o} limit 1 is valid. 2014-01-01 18:08:28,924 [taskExecutor-3] DEBUG exec: dpu: c.c.m.x.o.r.r.VirtuosoRDFRepo - Query select * where {?s ?p ?o} limit 1 has not null result. 2014-01-01 18:08:28,930 [taskExecutor-3] TRACE exec: dpu: c.c.m.x.o.b.m.RdfDatabase - executeQuery:ends 2014-01-01 18:08:28,936 [Thread-4] DEBUG exec: dpu: c.c.m.x.o.r.r.VirtuosoRDFRepo - Repository with data graph http://linked.opendata.cz/resource/odcs/internal/ destroyed SUCCESSFUL. 
2014-01-01 18:08:32,848 [taskExecutor-4] DEBUG exec: dpu: c.c.m.x.o.b.l.SqlAppenderImpl - flush() : get connection 2014-01-01 18:08:32,850 [taskExecutor-4] DEBUG exec: dpu: c.c.m.x.o.b.l.SqlAppenderImpl - flush() : flushIntoDatabase 2014-01-01 18:08:32,861 [taskExecutor-4] DEBUG exec: dpu: c.c.m.x.o.b.l.SqlAppenderImpl - flush() -> finished in: 13 ms 2014-01-01 18:08:48,918 [taskScheduler-3] TRACE exec: dpu: c.c.m.x.o.b.execution.Engine - Checking for new executions. 2014-01-01 18:08:48,997 [pool-2-thread-1] INFO exec:318 dpu: c.c.m.x.o.b.p.e.PipelineInfo - Starting execution number: 318 pipeline: Test Extractor nsoud.cz 2014-01-01 18:08:49,201 [pool-2-thread-1] INFO exec:318 dpu:191 c.c.m.x.o.r.r.VirtuosoRDFRepo - Set new data graph - http://linked.opendata.cz/resource/odcs/internal/pipeline/exec/318/dpu/191/du/0 2014-01-01 18:08:49,202 [pool-2-thread-1] INFO exec:318 dpu:191 c.c.m.x.o.r.r.VirtuosoRDFRepo - Virtuoso repository with data graph http://linked.opendata.cz/resource/odcs/internal/pipeline/exec/318/dpu/191/du/0 successfully initialized. 2014-01-01 18:08:49,203 [pool-2-thread-1] INFO exec:318 dpu:191 c.c.m.x.o.r.r.VirtuosoRDFRepo - Set new data graph - http://linked.opendata.cz/resource/odcs/internal/pipeline/exec/318/dpu/191/du/0 2014-01-01 18:08:49,204 [pool-2-thread-1] INFO exec:318 dpu:191 c.c.m.x.o.r.r.VirtuosoRDFRepo - Virtuoso repository with data graph http://linked.opendata.cz/resource/odcs/internal/pipeline/exec/318/dpu/191/du/0 successfully initialized. 2014-01-01 18:08:49,204 [pool-2-thread-1] INFO exec:318 dpu:191 c.c.m.x.o.b.e.p.Executor - Starting execution of dpu 191 = SPARQL Extractor nsoud.cz 2014-01-01 18:08:49,219 [dpu: SPARQL Extractor nsoud.cz] DEBUG exec:318 dpu:191 c.c.m.x.o.c.a.m.o.OSGIModuleFacade - Installing bundle from 'SPARQL_Extractor'/'SPARQL_Extractor-1.0.0.jar' 2014-01-01 18:08:49,610 [dpu: SPARQL Extractor nsoud.cz] DEBUG exec:318 dpu:191 c.c.m.x.o.b.e.dpu.Executor - Executing pre-executors ... 
2014-01-01 18:08:49,611 [dpu: SPARQL Extractor nsoud.cz] TRACE exec:318 dpu:191 c.c.m.x.o.b.e.dpu.Executor - Executing pre-executor: ContextPreparator 2014-01-01 18:08:49,611 [dpu: SPARQL Extractor nsoud.cz] TRACE exec:318 dpu:191 c.c.m.x.o.b.e.dpu.Executor - Executing pre-executor: Configurator 2014-01-01 18:08:49,612 [dpu: SPARQL Extractor nsoud.cz] DEBUG exec:318 dpu:191 c.c.m.x.o.c.app.dpu.DPURecord - getRawConf for: SPARQL Extractor nsoud.cz on: [60, 111, 98, 106, 101, 99, 116, 45, 115, 116, 114, 101, 97, 109, 62, 10, 32, 32, 60, 99, 122, 46, 99, 117, 110, 105, 46, 109, 102, 102, 46, 120, 114, 103, 46, 111, 100, 99, 115, 46, 101, 120, 116, 114, 97, 99, 116, 111, 114, 46, 114, 100, 102, 46, 82, 68, 70, 69, 120, 116, 114, 97, 99, 116, 111, 114, 67, 111, 110, 102, 105, 103, 62, 10, 32, 32, 32, 32, 60, 83, 80, 65, 82, 81, 76, 95, 95, 101, 110, 100, 112, 111, 105, 110, 116, 62, 104, 116, 116, 112, 58, 47, 47, 105, 110, 116, 101, 114, 110, 97, 108, 46, 120, 114, 103, 46, 99, 122, 58, 56, 56, 57, 48, 47, 115, 112, 97, 114, 113, 108, 60, 47, 83, 80, 65, 82, 81, 76, 95, 95, 101, 110, 100, 112, 111, 105, 110, 116, 62, 10, 32, 32, 32, 32, 60, 72, 111, 115, 116, 95, 95, 110, 97, 109, 101, 62, 60, 47, 72, 111, 115, 116, 95, 95, 110, 97, 109, 101, 62, 10, 32, 32, 32, 32, 60, 80, 97, 115, 115, 119, 111, 114, 100, 62, 60, 47, 80, 97, 115, 115, 119, 111, 114, 100, 62, 10, 32, 32, 32, 32, 60, 71, 114, 97, 112, 104, 115, 85, 114, 105, 32, 99, 108, 97, 115, 115, 61, 34, 108, 105, 110, 107, 101, 100, 45, 108, 105, 115, 116, 34, 62, 10, 32, 32, 32, 32, 32, 32, 60, 115, 116, 114, 105, 110, 103, 62, 104, 116, 116, 112, 58, 47, 47, 108, 105, 110, 107, 101, 100, 46, 111, 112, 101, 110, 100, 97, 116, 97, 46, 99, 122, 47, 114, 101, 115, 111, 117, 114, 99, 101, 47, 100, 97, 116, 97, 115, 101, 116, 47, 108, 101, 103, 105, 115, 108, 97, 116, 105, 111, 110, 47, 110, 115, 111, 117, 100, 46, 99, 122, 60, 47, 115, 116, 114, 105, 110, 103, 62, 10, 32, 32, 32, 32, 60, 47, 71, 114, 97, 112, 104, 
115, 85, 114, 105, 62, 10, 32, 32, 32, 32, 60, 83, 80, 65, 82, 81, 76, 95, 95, 113, 117, 101, 114, 121, 62, 99, 111, 110, 115, 116, 114, 117, 99, 116, 32, 123, 63, 115, 32, 63, 112, 32, 63, 111, 125, 32, 119, 104, 101, 114, 101, 32, 123, 63, 115, 32, 63, 112, 32, 63, 111, 125, 32, 108, 105, 109, 105, 116, 32, 53, 48, 48, 48, 60, 47, 83, 80, 65, 82, 81, 76, 95, 95, 113, 117, 101, 114, 121, 62, 10, 32, 32, 32, 32, 60, 69, 120, 116, 114, 97, 99, 116, 70, 97, 105, 108, 62, 116, 114, 117, 101, 60, 47, 69, 120, 116, 114, 97, 99, 116, 70, 97, 105, 108, 62, 10, 32, 32, 32, 32, 60, 85, 115, 101, 83, 116, 97, 116, 105, 115, 116, 105, 99, 97, 108, 72, 97, 110, 100, 108, 101, 114, 62, 102, 97, 108, 115, 101, 60, 47, 85, 115, 101, 83, 116, 97, 116, 105, 115, 116, 105, 99, 97, 108, 72, 97, 110, 100, 108, 101, 114, 62, 10, 32, 32, 32, 32, 60, 102, 97, 105, 108, 87, 104, 101, 110, 69, 114, 114, 111, 114, 115, 62, 102, 97, 108, 115, 101, 60, 47, 102, 97, 105, 108, 87, 104, 101, 110, 69, 114, 114, 111, 114, 115, 62, 10, 32, 32, 32, 32, 60, 114, 101, 116, 114, 121, 84, 105, 109, 101, 62, 49, 48, 48, 48, 60, 47, 114, 101, 116, 114, 121, 84, 105, 109, 101, 62, 10, 32, 32, 32, 32, 60, 114, 101, 116, 114, 121, 83, 105, 122, 101, 62, 45, 49, 60, 47, 114, 101, 116, 114, 121, 83, 105, 122, 101, 62, 10, 32, 32, 60, 47, 99, 122, 46, 99, 117, 110, 105, 46, 109, 102, 102, 46, 120, 114, 103, 46, 111, 100, 99, 115, 46, 101, 120, 116, 114, 97, 99, 116, 111, 114, 46, 114, 100, 102, 46, 82, 68, 70, 69, 120, 116, 114, 97, 99, 116, 111, 114, 67, 111, 110, 102, 105, 103, 62, 10, 60, 47, 111, 98, 106, 101, 99, 116, 45, 115, 116, 114, 101, 97, 109, 62] 2014-01-01 18:08:49,681 [dpu: SPARQL Extractor nsoud.cz] DEBUG exec:318 dpu:191 c.c.m.x.o.b.e.d.i.Configurator - DPU SPARQL Extractor nsoud.cz hes been configured. 
    2014-01-01 18:08:49,682 [dpu: SPARQL Extractor nsoud.cz] TRACE exec:318 dpu:191 c.c.m.x.o.b.e.dpu.Executor - Executing pre-executor: AnnotationsOutput
    2014-01-01 18:08:49,688 [dpu: SPARQL Extractor nsoud.cz] INFO exec:318 dpu:191 c.c.m.x.o.r.r.VirtuosoRDFRepo - Set new data graph - http://linked.opendata.cz/resource/odcs/internal/pipeline/exec/318/dpu/191/du/1
    2014-01-01 18:08:49,688 [dpu: SPARQL Extractor nsoud.cz] INFO exec:318 dpu:191 c.c.m.x.o.r.r.VirtuosoRDFRepo - Virtuoso repository with data graph http://linked.opendata.cz/resource/odcs/internal/pipeline/exec/318/dpu/191/du/1 successfully initialized.
    2014-01-01 18:08:49,689 [dpu: SPARQL Extractor nsoud.cz] DEBUG exec:318 dpu:191 c.c.m.x.o.b.e.d.i.AnnotationsOutput - Create output DataUnit for field: rdfDataUnit
    2014-01-01 18:08:49,689 [dpu: SPARQL Extractor nsoud.cz] TRACE exec:318 dpu:191 c.c.m.x.o.b.e.dpu.Executor - Executing pre-executor: AnnotationsInput
    2014-01-01 18:08:49,690 [dpu: SPARQL Extractor nsoud.cz] TRACE exec:318 dpu:191 c.c.m.x.o.b.e.dpu.Executor - Executing pre-executor: Restarter
    2014-01-01 18:08:49,691 [dpu: SPARQL Extractor nsoud.cz] DEBUG exec:318 dpu:191 c.c.m.x.o.b.e.dpu.Executor - Executing pre-executors ... done
    2014-01-01 18:08:49,691 [dpu: SPARQL Extractor nsoud.cz] INFO exec:318 dpu:191 c.c.m.x.o.b.e.dpu.Executor - DPU has already been executed, skipping the execution.
    2014-01-01 18:08:49,693 [dpu: SPARQL Extractor nsoud.cz] INFO exec:318 dpu:191 c.c.m.x.o.b.dpu.event.DPUEvent - DPU 'SPARQL Extractor nsoud.cz' finished
    2014-01-01 18:08:49,712 [dpu: SPARQL Extractor nsoud.cz] TRACE exec:318 dpu:191 c.c.m.x.o.b.e.dpu.Executor - DPU execution thread is about to finish
    2014-01-01 18:08:50,051 [taskExecutor-8] DEBUG exec: dpu: c.c.m.x.o.b.l.SqlAppenderImpl - flush() : get connection
    2014-01-01 18:08:50,053 [taskExecutor-8] DEBUG exec: dpu: c.c.m.x.o.b.l.SqlAppenderImpl - flush() : flushIntoDatabase
    2014-01-01 18:08:50,061 [taskExecutor-8] DEBUG exec: dpu: c.c.m.x.o.b.l.SqlAppenderImpl - flush() -> finished in: 10 ms
    2014-01-01 18:08:54,223 [pool-2-thread-1] INFO exec:318 dpu:192 c.c.m.x.o.r.r.VirtuosoRDFRepo - Set new data graph - http://linked.opendata.cz/resource/odcs/internal/pipeline/exec/318/dpu/192/du/0
    2014-01-01 18:08:54,224 [pool-2-thread-1] INFO exec:318 dpu:192 c.c.m.x.o.r.r.VirtuosoRDFRepo - Virtuoso repository with data graph http://linked.opendata.cz/resource/odcs/internal/pipeline/exec/318/dpu/192/du/0 successfully initialized.
    2014-01-01 18:08:54,224 [pool-2-thread-1] INFO exec:318 dpu:192 c.c.m.x.o.r.r.VirtuosoRDFRepo - Set new data graph - http://linked.opendata.cz/resource/odcs/internal/pipeline/exec/318/dpu/192/du/0
    2014-01-01 18:08:54,225 [pool-2-thread-1] INFO exec:318 dpu:192 c.c.m.x.o.r.r.VirtuosoRDFRepo - Virtuoso repository with data graph http://linked.opendata.cz/resource/odcs/internal/pipeline/exec/318/dpu/192/du/0 successfully initialized.
    2014-01-01 18:08:54,226 [pool-2-thread-1] INFO exec:318 dpu:192 c.c.m.x.o.b.e.p.Executor - Starting execution of dpu 192 = SPARQL Loader nsoud.cz
    2014-01-01 18:08:54,231 [dpu: SPARQL Loader nsoud.cz] DEBUG exec:318 dpu:192 c.c.m.x.o.c.a.m.o.OSGIModuleFacade - Installing bundle from 'SPARQL_Loader'/'SPARQL_Loader-1.0.0.jar'
    2014-01-01 18:08:54,252 [dpu: SPARQL Loader nsoud.cz] DEBUG exec:318 dpu:192 c.c.m.x.o.b.e.dpu.Executor - Executing pre-executors ...
    2014-01-01 18:08:54,252 [dpu: SPARQL Loader nsoud.cz] TRACE exec:318 dpu:192 c.c.m.x.o.b.e.dpu.Executor - Executing pre-executor: ContextPreparator
    2014-01-01 18:08:54,252 [dpu: SPARQL Loader nsoud.cz] TRACE exec:318 dpu:192 c.c.m.x.o.b.e.dpu.Executor - Executing pre-executor: Configurator
    2014-01-01 18:08:54,253 [dpu: SPARQL Loader nsoud.cz] DEBUG exec:318 dpu:192 c.c.m.x.o.c.app.dpu.DPURecord - getRawConf for: SPARQL Loader nsoud.cz on: [60, 111, 98, 106, 101, 99, 116, 45, 115, 116, 114, 101, 97, 109, 62, 10, 32, 32, 60, 99, 122, 46, 99, 117, 110, 105, 46, 109, 102, 102, 46, 120, 114, 103, 46, 111, 100, 99, 115, 46, 108, 111, 97, 100, 101, 114, 46, 114, 100, 102, 46, 82, 68, 70, 76, 111, 97, 100, 101, 114, 67, 111, 110, 102, 105, 103, 62, 10, 32, 32, 32, 32, 60, 83, 80, 65, 82, 81, 76, 95, 95, 101, 110, 100, 112, 111, 105, 110, 116, 62, 104, 116, 116, 112, 58, 47, 47, 108, 111, 99, 97, 108, 104, 111, 115, 116, 58, 56, 56, 57, 48, 47, 115, 112, 97, 114, 113, 108, 45, 97, 117, 116, 104, 60, 47, 83, 80, 65, 82, 81, 76, 95, 95, 101, 110, 100, 112, 111, 105, 110, 116, 62, 10, 32, 32, 32, 32, 60, 72, 111, 115, 116, 95, 95, 110, 97, 109, 101, 62, 100, 98, 97, 60, 47, 72, 111, 115, 116, 95, 95, 110, 97, 109, 101, 62, 10, 32, 32, 32, 32, 60, 80, 97, 115, 115, 119, 111, 114, 100, 62, 100, 98, 97, 60, 47, 80, 97, 115, 115, 119, 111, 114, 100, 62, 10, 32, 32, 32, 32, 60, 71, 114, 97, 112, 104, 115, 85, 114, 105, 32, 99, 108, 97, 115, 115, 61, 34, 108, 105, 110, 107, 101, 100, 45, 108, 105, 115, 116, 34, 62, 10, 32, 32, 32, 32, 32, 32, 60, 115, 116, 114, 105, 110, 103, 62, 104, 116, 116, 112, 58, 47, 47, 110, 115, 111, 117, 100, 53, 48, 48, 48, 60, 47, 115, 116, 114, 105, 110, 103, 62, 10, 32, 32, 32, 32, 60, 47, 71, 114, 97, 112, 104, 115, 85, 114, 105, 62, 10, 32, 32, 32, 32, 60, 103, 114, 97, 112, 104, 79, 112, 116, 105, 111, 110, 62, 79, 86, 69, 82, 82, 73, 68, 69, 60, 47, 103, 114, 97, 112, 104, 79, 112, 116, 105, 111, 110, 62, 10, 32, 32, 32, 32, 60, 105, 110, 115, 101, 114, 116, 79, 112, 116, 105, 111, 110, 62, 83, 84, 79, 80, 95, 87, 72, 69, 78, 95, 66, 65, 68, 95, 80, 65, 82, 84, 60, 47, 105, 110, 115, 101, 114, 116, 79, 112, 116, 105, 111, 110, 62, 10, 32, 32, 32, 32, 60, 99, 104, 117, 110, 107, 83, 105, 122, 101, 62, 49, 48, 48, 60, 47, 99, 104, 117, 110, 107, 83, 105, 122, 101, 62, 10, 32, 32, 32, 32, 60, 118, 97, 108, 105, 100, 68, 97, 116, 97, 66, 101, 102, 111, 114, 101, 62, 102, 97, 108, 115, 101, 60, 47, 118, 97, 108, 105, 100, 68, 97, 116, 97, 66, 101, 102, 111, 114, 101, 62, 10, 32, 32, 32, 32, 60, 114, 101, 116, 114, 121, 84, 105, 109, 101, 62, 49, 48, 48, 48, 60, 47, 114, 101, 116, 114, 121, 84, 105, 109, 101, 62, 10, 32, 32, 32, 32, 60, 114, 101, 116, 114, 121, 83, 105, 122, 101, 62, 45, 49, 60, 47, 114, 101, 116, 114, 121, 83, 105, 122, 101, 62, 10, 32, 32, 60, 47, 99, 122, 46, 99, 117, 110, 105, 46, 109, 102, 102, 46, 120, 114, 103, 46, 111, 100, 99, 115, 46, 108, 111, 97, 100, 101, 114, 46, 114, 100, 102, 46, 82, 68, 70, 76, 111, 97, 100, 101, 114, 67, 111, 110, 102, 105, 103, 62, 10, 60, 47, 111, 98, 106, 101, 99, 116, 45, 115, 116, 114, 101, 97, 109, 62]
    2014-01-01 18:08:54,258 [dpu: SPARQL Loader nsoud.cz] DEBUG exec:318 dpu:192 c.c.m.x.o.b.e.d.i.Configurator - DPU SPARQL Loader nsoud.cz hes been configured.
    2014-01-01 18:08:54,258 [dpu: SPARQL Loader nsoud.cz] TRACE exec:318 dpu:192 c.c.m.x.o.b.e.dpu.Executor - Executing pre-executor: AnnotationsOutput
    2014-01-01 18:08:54,261 [dpu: SPARQL Loader nsoud.cz] TRACE exec:318 dpu:192 c.c.m.x.o.b.e.dpu.Executor - Executing pre-executor: AnnotationsInput
    2014-01-01 18:08:54,262 [dpu: SPARQL Loader nsoud.cz] DEBUG exec:318 dpu:192 c.c.m.x.o.b.e.d.i.AnnotationsInput - Assign DataUnit URI: input for field: rdfDataUnit
    2014-01-01 18:08:54,262 [dpu: SPARQL Loader nsoud.cz] TRACE exec:318 dpu:192 c.c.m.x.o.b.e.dpu.Executor - Executing pre-executor: Restarter
    2014-01-01 18:08:54,262 [dpu: SPARQL Loader nsoud.cz] DEBUG exec:318 dpu:192 c.c.m.x.o.b.e.dpu.Executor - Executing pre-executors ... done
    2014-01-01 18:08:54,263 [dpu: SPARQL Loader nsoud.cz] INFO exec:318 dpu:192 c.c.m.x.o.b.e.dpu.Executor - DPU has already been executed, skipping the execution.
    2014-01-01 18:08:54,264 [dpu: SPARQL Loader nsoud.cz] INFO exec:318 dpu:192 c.c.m.x.o.b.dpu.event.DPUEvent - DPU 'SPARQL Loader nsoud.cz' finished
    2014-01-01 18:08:54,281 [dpu: SPARQL Loader nsoud.cz] TRACE exec:318 dpu:192 c.c.m.x.o.b.e.dpu.Executor - DPU execution thread is about to finish
    2014-01-01 18:08:54,352 [taskExecutor-9] DEBUG exec: dpu: c.c.m.x.o.b.l.SqlAppenderImpl - flush() : get connection
    2014-01-01 18:08:54,354 [taskExecutor-9] DEBUG exec: dpu: c.c.m.x.o.b.l.SqlAppenderImpl - flush() : flushIntoDatabase
    2014-01-01 18:08:54,361 [taskExecutor-9] DEBUG exec: dpu: c.c.m.x.o.b.l.SqlAppenderImpl - flush() -> finished in: 9 ms
    2014-01-01 18:08:58,830 [taskScheduler-8] TRACE exec: dpu: c.c.m.x.o.c.a.s.Schedule - onTimeCheck started
    2014-01-01 18:08:58,838 [taskScheduler-8] TRACE exec: dpu: c.c.m.x.o.c.a.s.Schedule - onTimeCheck finished
    2014-01-01 18:08:59,231 [pool-2-thread-1] DEBUG exec:318 dpu: c.c.m.x.o.b.e.p.impl.CleanUp - CleanUp start ..
    2014-01-01 18:08:59,233 [pool-2-thread-1] INFO exec:318 dpu: c.c.m.x.o.r.r.VirtuosoRDFRepo - Virtuoso repository with data graph http://linked.opendata.cz/resource/odcs/internal/pipeline/exec/318/dpu/191/du/0 succesfully shut down
    2014-01-01 18:08:59,234 [pool-2-thread-1] INFO exec:318 dpu: c.c.m.x.o.r.r.VirtuosoRDFRepo - Virtuoso repository with data graph http://linked.opendata.cz/resource/odcs/internal/pipeline/exec/318/dpu/191/du/0 succesfully shut down
    2014-01-01 18:08:59,236 [pool-2-thread-1] INFO exec:318 dpu: c.c.m.x.o.r.r.VirtuosoRDFRepo - Virtuoso repository with data graph http://linked.opendata.cz/resource/odcs/internal/pipeline/exec/318/dpu/191/du/1 succesfully shut down
    2014-01-01 18:08:59,238 [pool-2-thread-1] INFO exec:318 dpu: c.c.m.x.o.r.r.VirtuosoRDFRepo - Virtuoso repository with data graph http://linked.opendata.cz/resource/odcs/internal/pipeline/exec/318/dpu/192/du/0 succesfully shut down
    2014-01-01 18:08:59,240 [pool-2-thread-1] INFO exec:318 dpu: c.c.m.x.o.r.r.VirtuosoRDFRepo - Virtuoso repository with data graph http://linked.opendata.cz/resource/odcs/internal/pipeline/exec/318/dpu/192/du/0 succesfully shut down
    2014-01-01 18:08:59,243 [pool-2-thread-1] DEBUG exec:318 dpu: c.c.m.x.o.b.e.p.impl.CleanUp - CleanUp has been finished ..
    2014-01-01 18:08:59,255 [Thread-7] DEBUG exec:318 dpu: c.c.m.x.o.r.r.VirtuosoRDFRepo - Repository with data graph http://linked.opendata.cz/resource/odcs/internal/pipeline/exec/318/dpu/191/du/1 destroyed SUCCESSFUL.
    2014-01-01 18:08:59,268 [Thread-8] DEBUG exec:318 dpu: c.c.m.x.o.r.r.VirtuosoRDFRepo - Repository with data graph http://linked.opendata.cz/resource/odcs/internal/pipeline/exec/318/dpu/192/du/0 destroyed SUCCESSFUL.
    2014-01-01 18:08:59,276 [Thread-5] DEBUG exec:318 dpu: c.c.m.x.o.r.r.VirtuosoRDFRepo - Repository with data graph http://linked.opendata.cz/resource/odcs/internal/pipeline/exec/318/dpu/191/du/0 destroyed SUCCESSFUL.
    2014-01-01 18:08:59,279 [Thread-6] DEBUG exec:318 dpu: c.c.m.x.o.r.r.VirtuosoRDFRepo - Repository with data graph http://linked.opendata.cz/resource/odcs/internal/pipeline/exec/318/dpu/191/du/0 destroyed SUCCESSFUL.
    2014-01-01 18:08:59,282 [Thread-9] DEBUG exec:318 dpu: c.c.m.x.o.r.r.VirtuosoRDFRepo - Repository with data graph http://linked.opendata.cz/resource/odcs/internal/pipeline/exec/318/dpu/192/du/0 destroyed SUCCESSFUL.
    2014-01-01 18:08:59,345 [pool-2-thread-1] INFO exec:318 dpu: c.c.m.x.o.b.p.e.PipelineFinished - Pipeline finished with status: FINISHED_SUCCESS

skodapetr commented 10 years ago

Once you two have sorted this out between yourselves, Jirka, let's please take a look at that misbehaving backend. Otherwise I'm not sure we can make sense of these logs ..

tomesj commented 10 years ago

I use Virtuoso for intermediate results, and in the graph http://nsoud5000 I have those 5000 triples.

skodapetr commented 10 years ago

Tomáš: please wait a moment .. I'll look at how the backend behaves (although I'd expect there to be no output at all in that case ..). In any case, Jirka is right that we should have this fixed .. because then all the data disappears, not just some of it. I'm just a bit surprised this was only discovered now .. but better late than never.

tomesj commented 10 years ago

So, regarding the outages: as far as outages go, the extractor and the loader are practically the same, so I'll use the SPARQL loader as the example.

It splits into 2 cases:

1) during creation of a data part

2) while uploading the data parts

Execution ends in the following case (an exception is thrown and the upload stops):
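The fix the issue asks for (do not silently drop the triple whose add failed) can be sketched generically. This is a hypothetical illustration, not the actual ODCS RDF-module code: the failed statement is kept and replayed until the repository is reachable again or a retry budget runs out.

```java
import java.util.concurrent.atomic.AtomicInteger;

/** Sketch of retry-with-replay: the in-flight item is retried, never dropped. */
public class RetryAdd {

    /** Functional stand-in for RepositoryConnection.add(statement). */
    public interface Sink {
        void add(String statement) throws Exception;
    }

    /**
     * Keeps re-adding the SAME statement until it succeeds or the retry
     * budget is exhausted, so a transient outage cannot swallow the triple.
     */
    public static boolean addWithReplay(Sink sink, String statement,
                                        int maxAttempts, long waitMillis) {
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                sink.add(statement);   // replay the same statement each time
                return true;
            } catch (Exception e) {
                try { Thread.sleep(waitMillis); } catch (InterruptedException ie) {
                    Thread.currentThread().interrupt();
                    return false;
                }
            }
        }
        return false;
    }

    public static void main(String[] args) {
        // Simulate a sink that fails twice (outage) and then recovers.
        AtomicInteger calls = new AtomicInteger();
        StringBuilder store = new StringBuilder();
        Sink flaky = s -> {
            if (calls.incrementAndGet() <= 2) throw new Exception("connection down");
            store.append(s);
        };
        boolean ok = addWithReplay(flaky, "<s> <p> <o> .", 5, 1);
        System.out.println(ok + " " + store);  // true <s> <p> <o> .
    }
}
```

The same pattern applies to all the baseRDF methods: wrap each connection call so the operation's arguments survive the reconnect and are re-executed.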

tomas-knap commented 10 years ago

One more thing: what detailed settings do you have for the extractor/loader on that pipeline of yours? It will probably behave slightly differently depending on whether the error handler is checked or not, for example.

And isn't there a problem with setting the named graph when the connection is restarted? Or a problem with the RDFParser object?

skodapetr commented 10 years ago

So .. I've changed something in the backend .. but I haven't run it yet. Anyway, watching you two here .. Jirka, wouldn't you simply add trace logs for the configuration and the method calls? It might make it easier for you to replicate or locate the problem (if there is one, or show that it's not on your side, that it's an anomaly, etc. ..) .. just an idea, it's what I do when something fails elsewhere but works for me (provided it's possible to add logs).

tomas-knap commented 10 years ago

The problem is that it probably depends on when Virtuoso is restarted. Jirka, could you please add a log that makes it clear during which connection.x() method the FaultTolerantRepository kicks in and waits for reconnection? I.e., it would log, for example, "c.c.m.x.o.r.impl.FailureTolerantRepositoryWrapper - Database is down after 1 attempts to reconnect while calling connection.x()" instead of the current "c.c.m.x.o.r.impl.FailureTolerantRepositoryWrapper - Database is down after 1 attempts".
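The "which connection.x() call triggered the wrapper" logging requested above can be done generically, for instance with a dynamic proxy around the connection interface. This is only an illustrative sketch with a toy interface, not the real FailureTolerantRepositoryWrapper code:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;

/** Sketch: retrying proxy that logs WHICH method was in flight when the DB was down. */
public class TolerantProxy {

    /** Toy stand-in for RepositoryConnection. */
    public interface Connection {
        void add(String statement) throws Exception;
    }

    /** Wraps a Connection so every call is retried, logging the method name. */
    public static Connection wrap(Connection target, int maxAttempts, StringBuilder log) {
        InvocationHandler handler = (proxy, method, args) -> {
            for (int attempt = 1; ; attempt++) {
                try {
                    return method.invoke(target, args);
                } catch (Exception e) {
                    // The log message format proposed in the discussion.
                    log.append("Database is down after ").append(attempt)
                       .append(" attempts to reconnect while calling connection.")
                       .append(method.getName()).append("()\n");
                    if (attempt >= maxAttempts) throw e;
                }
            }
        };
        return (Connection) Proxy.newProxyInstance(
                Connection.class.getClassLoader(),
                new Class<?>[]{Connection.class}, handler);
    }

    public static void main(String[] args) throws Exception {
        StringBuilder log = new StringBuilder();
        int[] calls = {0};
        Connection flaky = s -> { if (++calls[0] == 1) throw new Exception("down"); };
        wrap(flaky, 3, log).add("<s> <p> <o> .");
        System.out.print(log);  // one line mentioning connection.add()
    }
}
```

The advantage of intercepting at the interface level is that the method name is available for free, so every operation of the wrapped connection gets the richer log message without touching each call site.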

Jirka, also see my questions in the previous comment, please answer them.

tomesj commented 10 years ago

Regarding the answers: my settings are the same as they were on odcs-test, i.e. the extractor does not use the statistical handler and the loader uses graph override and stop on error :-)

I'll add the log to FailureTolerantRepositoryWrapper right away :-)

tomesj commented 10 years ago

Actually, that's exactly what I was asking Petr: whether the correct graph is being set there. There's no problem in RDFParser, though. When the backend restarts and the SPARQL extractor has already finished, it is not run again; the SPARQL loader's execute method runs directly, but unfortunately its input is empty (instead of containing the triples from the extraction).

skodapetr commented 10 years ago

Yeah, you wrote that; it should be fine now, hopefully? At least I tried to make it better than before .. it's certainly a spot that deserves a rewrite .. but it's a rather "core" part of the backend, so I didn't want to touch it before the submission deadline.

tomesj commented 10 years ago

So you changed it, I assume, in the last commit you pushed here. I haven't had a chance to try it yet, but from what you write I assume it's OK now :-) I'll try it anyway :-)

skodapetr commented 10 years ago

Hopefully .. I've just spent an hour fighting maven and my own inability to read a package name .. which rather "wore me out" .. but it was about the problem you described. The trouble is that the dataUnits have no identifier .. and the fact that no reset is done is "new" (a month or so) .. so what I did is: if a dataUnit of the given name and type already exists, it is reused.

That should solve this problem. Also, if you created 2 output dataUnits of the same name and type, they would point to a single DataUnit instance (one RDF) .. which doesn't sound that unreasonable, I think.
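The reuse rule described above (same name and same type means the same instance) boils down to a keyed registry. A minimal sketch with made-up names, not the actual backend classes:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

/** Sketch: reuse an existing data unit when one of the same name and type exists. */
public class DataUnitRegistry {

    private final Map<String, Object> units = new HashMap<>();

    /**
     * Returns the existing unit for (name, type) if present, otherwise creates
     * one. Two requests for the same name and type share a single instance,
     * which is what lets a restarted execution see the earlier data.
     */
    @SuppressWarnings("unchecked")
    public <T> T getOrCreate(String name, Class<T> type, Supplier<T> factory) {
        String key = type.getName() + "|" + name;   // composite (type, name) key
        return (T) units.computeIfAbsent(key, k -> factory.get());
    }

    public static void main(String[] args) {
        DataUnitRegistry reg = new DataUnitRegistry();
        StringBuilder a = reg.getOrCreate("input", StringBuilder.class, StringBuilder::new);
        StringBuilder b = reg.getOrCreate("input", StringBuilder.class, StringBuilder::new);
        System.out.println(a == b);  // true: same name + type -> same instance
    }
}
```

Keying on both name and type avoids accidentally handing an RDF unit to a consumer expecting a file unit with the same name.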

tomesj commented 10 years ago

Unfortunately the backend restart still doesn't work correctly and data gets lost :-(

skodapetr commented 10 years ago

Is DBpedia enough for replicating it? Probably yes, right? I just have to manage to stop it in time .. while the loader is running?

tomesj commented 10 years ago

It probably doesn't matter which pipeline you use; obviously, the more triples the better. You have to manage to stop it while loading is still in progress. You can try this sample one: it has 5000 triples, so you'll definitely make it :-)

skodapetr commented 10 years ago

There was a problem .. it was mixing up input and output on reload .. but I don't know whether that plays a role, because the only effect was that it got loaded twice. I've added some logs .. in any case it's in the experimental branch .. so could you switch over and look at it there?

One idea, for debugging purposes: when the Virtuoso repository is created, you could log how many triples it already contains. And if you're really paranoid and can't figure it out .. I would dump the triple count even more regularly; you'll have to remove it afterwards, but it should really show what is going on there.

tomesj commented 10 years ago

Could you please add it to master too, so we can make some progress towards a solution? When I'm debugging a problem in master and the backend restart is broken there, the fact that you fixed it in the experimental branch is nice, but completely useless to me.

I can add logs with the number of triples in the graph when Virtuoso is initialized, if that's of any use to you :-)

skodapetr commented 10 years ago

Well, that will happen anyway .. because master will become release 0.0.5 and experimental will become master :)

tomas-knap commented 10 years ago

Jirka, you should now have everything in the master branch.

On Wed, Jan 1, 2014 at 11:28 PM, Petr Škoda notifications@github.com wrote:

Well, that will happen anyway .. because master will become release 0.0.5 and experimental will become master :)

— Reply to this email directly or view it on GitHub: https://github.com/mff-uk/ODCS/issues/877#issuecomment-31431722

tomesj commented 10 years ago

So I tried the backend restart and it looks OK: data is no longer lost. So that part should be solved.

I tried once more to reconstruct the problem with the outages, but for me everything is still OK after the DB comes back up (all data is subsequently extracted and loaded correctly), so no problem here.

So I need you, @tomas-knap, to try to reproduce it on your side and tell me where you see the problem.

tomas-knap commented 10 years ago

Jirka, it really doesn't work. Look at odcs-test, pipeline "Extractor nsoud test restart". That pipeline has a single extractor and should extract 20,000 triples. I run it with the statistical and error handler; as you can see from the log, I kill the RDF db after 4400 triples loaded:

screen shot 2014-01-03 at 8 21 59 pm

Then I restart the db and the pipeline finishes. But when I then browse the results, I get:

screen shot 2014-01-03 at 8 22 59 pm

So roughly the first 4400 triples are missing.

tomas-knap commented 10 years ago

It doesn't work without the statistical and error handler either; the data is incomplete there as well, which is not surprising.

tomas-knap commented 10 years ago

Nothing else for it: you'll have to debug it directly on the odcs server. I'll create an account for you and send the details by mail.

To kill the RDF database, just call: `sudo /etc/init.d/virtuoso-opensourceETL-RDF stop`

To start it again: `sudo /etc/init.d/virtuoso-opensourceETL-RDF start`

I recommend the same procedure I use: run the pipeline in debug mode, then kill the rdf db, start it again, and check in the log that you hit the right moment. Then check via browse/query how many triples you have.

tomesj commented 10 years ago

Good idea, thanks for the account and the credentials :-) This way I'll finally be able to detect the bug on the server myself :-)

tomesj commented 10 years ago

The pipeline "Extractor nsoud test restart" with the given settings on odcs-test currently has no triples in the configured graph.

tomesj commented 10 years ago

I have the same settings locally as on odcs-test, but the graph is currently empty. So please set up the pipeline (graph) for the DB outage test so that I have some RDF data and can try it.

Thanks

tomas-knap commented 10 years ago

Please use the same graph, but append "/backup" to the end of the graph name.

2014/1/5 Jiří Tomeš notifications@github.com

I have the same settings locally as on odcs-test, but the graph is currently empty. So please set up the pipeline (graph) for the DB outage test so that I have some RDF data and can try it.

Thanks

Reply to this email directly or view it on GitHub: https://github.com/mff-uk/ODCS/issues/877#issuecomment-31614730

ghost commented 10 years ago

wontfix; restarting the pipeline from the last point where all data was still correct will resolve this.