Closed. curiousa closed this issue 5 years ago.
My guess is that the "database is locked" part of the error message is the most relevant. I'm not sure exactly how you are scaling up the run in your cluster environment, but if you are still using run.py, you might want to try setting use_multiprocessing = False
at the top of the script. I am guessing that multiple processes are trying to access the same SQLite file at the same time, producing the "database is locked" error. You might also consider running the Rosetta command that run.py outputs directly in the cluster environment, without the run.py wrapper script, or via your cluster-specific submission script.
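As an illustration (not part of run.py itself), this is the failure mode being described: when one SQLite connection holds a write lock, any other connection that tries to write to the same database file gets a "database is locked" error. A minimal sketch using Python's sqlite3 module, with a hypothetical features.db3 file standing in for whatever database file the Rosetta run writes:

```python
import os
import sqlite3
import tempfile

# Hypothetical database file standing in for the one the Rosetta
# features reporter writes; the name is illustrative only.
path = os.path.join(tempfile.mkdtemp(), "features.db3")

# First connection: take an exclusive write lock and hold it.
# isolation_level=None puts the module in autocommit mode, so the
# explicit BEGIN EXCLUSIVE below is not wrapped in another transaction.
writer = sqlite3.connect(path, isolation_level=None)
writer.execute("CREATE TABLE protocols (protocol_id INTEGER PRIMARY KEY)")
writer.execute("BEGIN EXCLUSIVE")

# Second connection: any write now fails instead of waiting forever.
# timeout=0.1 makes it give up quickly rather than retry for the
# default 5 seconds.
other = sqlite3.connect(path, timeout=0.1)
try:
    other.execute("INSERT INTO protocols DEFAULT VALUES")
except sqlite3.OperationalError as exc:
    print(exc)  # -> database is locked
```

With use_multiprocessing = False everything stays in one process, so only one connection ever writes to the file at a time.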
Thanks, Kyle. At this point I haven't been scaling up anything; I was just trying to run the wrapper script run.py in a Linux environment. Setting use_multiprocessing = False didn't fix the problem. I also tried running the commands run.py outputs directly at the terminal prompt on the cluster, but they result in the same error. Do you have any other ideas?
Could it be that the sqlite3 built into Rosetta is defective? Is there any way I can force Rosetta to use the locally installed version of sqlite3? (Shooting in the dark here...)
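One way to narrow this down, independent of Rosetta, is to check whether SQLite writes succeed at all on the cluster filesystem: some shared filesystems (certain NFS mounts, for example) do not support the file locking SQLite relies on, which can produce "database is locked" even with a single process. A minimal check using the system Python's own sqlite3 module (the filename lock_test.db3 is arbitrary; run it from the same directory where run.py writes its output):

```python
import os
import sqlite3

# Arbitrary scratch file; remove any leftover copy so the check is clean.
if os.path.exists("lock_test.db3"):
    os.remove("lock_test.db3")

# Create, write, and read back a tiny database on this filesystem.
conn = sqlite3.connect("lock_test.db3")
conn.execute("CREATE TABLE t (x INTEGER)")
conn.execute("INSERT INTO t VALUES (1)")
conn.commit()
print(conn.execute("SELECT x FROM t").fetchone()[0])  # -> 1
conn.close()
```

If this fails with "database is locked", the problem is the filesystem rather than the sqlite3 bundled with Rosetta.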
Hello there,
I wanted to use flex ddG to estimate the effect of mutations. Given that a single mutation in my system takes ~4 hours with optimal parameters (as estimated from a successful run of flex ddG with the prebuilt Rosetta binaries on macOS), I tried scaling up and running it on a cluster with the Linux distribution of Rosetta. When I execute the run.py script on Linux, I get errors when the database files are written.
I tried 1) the prebuilt Rosetta binaries for Linux; 2) compiling the source code myself with gcc/6.1.0 and python/2.7.14 on Linux, to make sure it is compatible with the environment. I got exactly the same error (see below) whether run.py used the rosetta_scripts executable from the prebuilt or the self-compiled Rosetta distribution. I would appreciate it if you could help me figure out the source of this problem.
Thank you, Alina
Here's the error I get when running the tutorial example on Linux:
protocols.features.ReportToDB: Reporting features for 427 of the 427 total residues in the pose 1JTG_AB_0001 for batch 'structreport'.
basic.database.schema_generator.Schema: [ ERROR ] ERROR writing schema after retry. CREATE TABLE IF NOT EXISTS protocols( protocol_id INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL, specified_options TEXT, command_line TEXT, svn_url TEXT, svn_version TEXT, script TEXT);
basic.database.schema_generator.Schema: [ ERROR ] database is locked
ERROR:: Exit from: src/basic/database/schema_generator/Schema.cc line: 260
BACKTRACE: [0x58c6188] [0x5574ac4] [0x1e669ee] [0x1e710eb] [0x1e73b50] [0x3544771] [0x354593b] [0x35460be] [0x25d8983] [0x3544771] [0x354593b] [0x35460be] [0x35aa058] [0x35abb61] [0x364e6d8] [0x410acb] [0x5e0bee4] [0x6151fd]
protocols.rosetta_scripts.ParsedProtocol: [ ERROR ] Exception while processing procotol:
File: src/basic/database/schema_generator/Schema.cc:260 [ ERROR ] UtilityExitException ERROR:
protocols.rosetta_scripts.ParsedProtocol: [ ERROR ] Exception while processing procotol:
File: src/basic/database/schema_generator/Schema.cc:260 [ ERROR ] UtilityExitException ERROR:
protocols.jd2.JobDistributor: [ ERROR ]
[ERROR] Exception caught by JobDistributor for job 1JTG_AB_0001
File: src/basic/database/schema_generator/Schema.cc:260 [ ERROR ] UtilityExitException ERROR:
protocols.jd2.JobDistributor: [ ERROR ]
protocols.jd2.JobDistributor: [ WARNING ] 1JTG_AB_0001 reported failure and will NOT retry
protocols.jd2.JobDistributor: no more batches to process...
protocols.jd2.JobDistributor: 1 jobs considered, 1 jobs attempted in 29 seconds
Error: [ ERROR ] Exception caught by rosetta_scripts application:
File: src/protocols/jd2/JobDistributor.cc:329
1 jobs failed; check output for error messages
Error: [ ERROR ]