Sorry to bother you. I have a Y-chromosome dataset with 920 individuals and 38,323 variants, analysed as a single partition under GTR+G. Based on a literature search, the chain needs to run for about 100 million states. My Linux machine has 224 CPU cores in total.
However, BEAST runs very slowly on this system. I looked up some suggestions on Google and tried the following options: 1) -threads 60; 2) -beagle_sse -beagle_instances 60; 3) -threads 30 -beagle_instances 30. All of them ran at about 30 hours per million states, which works out to roughly 3,000 hours, or about four months, for the full chain.
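For reference, the exact invocations were of this form (beast_input.xml is a placeholder for my actual XML file):

    beast -threads 60 beast_input.xml
    beast -beagle_sse -beagle_instances 60 beast_input.xml
    beast -threads 30 -beagle_instances 30 beast_input.xml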
Can you give me some ideas to speed this up on Linux? I also have a second question: instead of one long chain, can I run 10 independent chains with 10 different seeds and then merge the log files and tree files with LogCombiner?
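If that approach is valid, I assume the merge commands would look something like this (file names are placeholders, and I am not certain whether -burnin expects a number of states or a number of samples):

    logcombiner -burnin 1000000 run1.log run2.log run3.log combined.log
    logcombiner -trees -burnin 1000000 run1.trees run2.trees run3.trees combined.trees

(and similarly for all ten runs)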