glennhickey / progressiveCactus

Distribution package for the Progressive Cactus multiple genome aligner. Dependencies are linked as submodules.
80 stars 26 forks

Running too long on two simulated genomes #6

Open iminkin opened 10 years ago

iminkin commented 10 years ago

Tried:

./bin/runProgressiveCactus.sh ./work3/seqfile3 ./temp3 ./work3/work3.hal

Got:

iminkin@iminkin-VirtualBox:~/Program/progressiveCactus$ ./bin/runProgressiveCactus.sh ./work3/seqfile3 ./temp3 ./work3/work3.hal

Beginning Alignment



** ALERT **

The only jobs that I have detected running for at least the past 4200s are 1 ktservers. Furthermore, there appears to have been 2 failed jobs. It is likely that Progressive Cactus is in a deadlock state and will not finish until the servers or your batch system time out. Suggestions:

  • wait a bit. Maybe it will resume
  • look for fatal errors in ./temp3/cactus.log
  • jobTreeStatus --jobTree ./temp3/jobTree --verbose
  • check your resource manager to see if any more jobs are queued. maybe your cluster is just busy...
  • if not it's probably time to abort. Note that you can (and probably should) kill any trailing ktserver jobs by running rm -rf ./temp3/jobTree They will eventually timeout on their own but it could take days.

Directory "temp3": https://docs.google.com/file/d/0BwczEe34tHjOZ2NXeHNpcUFhdmM/edit?usp=sharing

Directory "work3": https://docs.google.com/file/d/0BwczEe34tHjOWTRfdEhXZmxhV3M/edit?usp=sharing

benedictpaten commented 10 years ago

Hi Ilya,

It's probably easier if you just send the log file (./temp3/cactus.log). It looks like there were issues starting and stopping the DB server.

Benedict


joelarmstrong commented 10 years ago

Hi Ilya,

It's an odd error, but it's most likely that some ktserver processes were left hanging around from a previous run attempt.

If you change the working directory from temp3 to something else, or kill all ktserver processes, then run again, it should work fine.
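Joel's second suggestion can be sketched as a small pre-flight check before retrying. This is an illustrative snippet, not part of Progressive Cactus; the process name `ktserver` is taken from the alert above:

```shell
#!/bin/sh
# Look for ktserver processes left over from a previous run and kill
# them before starting a new alignment (-x matches the exact name).
if pgrep -x ktserver > /dev/null; then
    echo "killing leftover ktserver processes"
    pkill -x ktserver
else
    echo "no leftover ktserver processes found"
fi
```

Combined with a fresh working directory, this avoids the stale-server deadlock described above.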

iminkin commented 10 years ago

Hi Joel,

Your suggestions worked, thanks a lot!

iminkin commented 10 years ago

Getting the same error while running on 15 small genomes. At the time I started cactusGraph, there were no ktserver processes, and I hadn't used the folder "./work6/temp123" as a temp folder before either. Log file: http://pastebin.com/iDgqcXCF

iminkin commented 10 years ago

Any comments?

glennhickey commented 10 years ago

Progressive Cactus and many of its submodules were updated from the development branch last week, fixing many issues. I don't know if any of the changes will help you, but it may be worth trying again with the latest version from git (it's changed enough that I would recommend recloning and building from scratch).
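A minimal sketch of the reclone-and-rebuild step described here, assuming the standard GitHub URL and a top-level `make` target; `--recursive` pulls in the submodule dependencies mentioned in the repo description:

```shell
# Start from a fresh checkout so no stale submodules or build
# artifacts linger, then build everything from scratch.
git clone --recursive https://github.com/glennhickey/progressiveCactus.git
cd progressiveCactus
make
```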


iminkin commented 10 years ago

I recloned and rebuilt but the error persists.