stivalaa / culture_cooperation

Culture and cooperation in a spatial public goods game
GNU General Public License v3.0

No results for me. #3

Closed Frostjon closed 5 years ago

Frostjon commented 5 years ago

I used this command to compile the model: "g++ model.cpp -o model -I /home/zh/lattice-jointactivity-simcoop-social-noise-constantmpcr-cpp-end/src/ -std=c++11". No error occurred at that point, but the run seems to get stuck. This is the output:

[zh@localhost ~]$ mpirun --mca mpi_warn_on_fork 0 python ./lattice-python-mpi/src/axelrod/geo/expphysicstimeline/multiruninitmain.py m:100 F:5 strategy_update_rule:fermi culture_update_rule:fermi ./lattice-jointactivity-simcoop-social-noise-constantmpcr-cpp-end/model 10000
Psyco not installed or failed execution. Using c++ version with ./lattice-jointactivity-simcoop-social-noise-constantmpcr-cpp-end/model
Psyco not installed or failed execution. Using c++ version with ./lattice-jointactivity-simcoop-social-noise-constantmpcr-cpp-end/model
Psyco not installed or failed execution. Using c++ version with ./lattice-jointactivity-simcoop-social-noise-constantmpcr-cpp-end/model
Psyco not installed or failed execution. Using c++ version with ./lattice-jointactivity-simcoop-social-noise-constantmpcr-cpp-end/model
Clean start
Writing results to results/10000/results2.csv
700 of total 700 models to run
175 models per MPI task
time series: writing total 70700 time step records
rank 2: 10000,30,10.000000,1.000000,0.000000,None,2,5,0.600000,0.000010,0
Clean start
Writing results to results/10000/results0.csv
Clean start
Writing results to results/10000/results1.csv
700 of total 700 models to run
175 models per MPI task
time series: writing total 70700 time step records
rank 1: 10000,30,10.000000,1.000000,0.000000,None,2,5,0.600000,0.000001,0
700 of total 700 models to run
175 models per MPI task
time series: writing total 70700 time step records
rank 0: 10000,30,10.000000,1.000000,0.000000,None,2,5,0.600000,0.000000,0
Clean start
Writing results to results/10000/results3.csv
700 of total 700 models to run
175 models per MPI task
time series: writing total 70700 time step records
rank 3: 10000,30,10.000000,1.000000,0.000000,None,2,5,0.600000,0.000100,0
writeNetwork: 0.539466142654
writeNetwork: 0.64174413681
writeNetwork: 0.700527191162
writeNetwork: 0.783866167068
writeNetwork: 0.658153057098
writeNetwork: 0.493187189102
writeNetwork: 0.535974979401
writeNetwork: 0.622301101685

and it just stops here. What can I do? Thanks a lot!

stivalaa commented 5 years ago

It is probably still running. It can take a very long time (days or weeks). You can use the job monitoring system to check what it is doing. Maybe try a much smaller lattice and parameter set for testing first.
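For example, a minimal check with standard Linux tools (the process names and result paths below are taken from the output you posted, so adjust them if yours differ) is to see whether the MPI ranks are still using CPU and whether the per-rank result files keep growing:

```sh
# Are the Python driver and the C++ model processes still consuming CPU?
ps aux | grep -E 'multiruninitmain|constantmpcr-cpp-end/model' | grep -v grep
top -u zh

# Are the per-rank result files still being written?
# Sizes and timestamps should keep changing while the run makes progress.
ls -l results/10000/results*.csv
```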

Frostjon commented 5 years ago

When I use a smaller lattice and parameter set for testing, the program still seems to get stuck. I used this command: "mpirun --mca mpi_warn_on_fork 0 python ./lattice-python-mpi/src/axelrod/geo/expphysicstimeline/multiruninitmain.py m:4 F:5 strategy_update_rule:fermi culture_update_rule:fermi ./lattice-jointactivity-simcoop-social-noise-constantmpcr-cpp-end/model 16". The output is as follows:

[zh@localhost ~]$ mpirun --mca mpi_warn_on_fork 0 python ./lattice-python-mpi/src/axelrod/geo/expphysicstimeline/multiruninitmain.py m:4 F:5 strategy_update_rule:fermi culture_update_rule:fermi ./lattice-jointactivity-simcoop-social-noise-constantmpcr-cpp-end/model 16
Psyco not installed or failed execution. Using c++ version with ./lattice-jointactivity-simcoop-social-noise-constantmpcr-cpp-end/model
Psyco not installed or failed execution. Using c++ version with ./lattice-jointactivity-simcoop-social-noise-constantmpcr-cpp-end/model
Psyco not installed or failed execution. Using c++ version with ./lattice-jointactivity-simcoop-social-noise-constantmpcr-cpp-end/model
Psyco not installed or failed execution. Using c++ version with ./lattice-jointactivity-simcoop-social-noise-constantmpcr-cpp-end/model
Clean start
Writing results to results/16/results3.csv
700 of total 700 models to run
175 models per MPI task
time series: writing total 70700 time step records
rank 3: 16,30,10.000000,1.000000,0.000000,None,2,5,0.600000,0.000100,0
Clean start
Writing results to results/16/results2.csv
700 of total 700 models to run
175 models per MPI task
time series: writing total 70700 time step records
rank 2: 16,30,10.000000,1.000000,0.000000,None,2,5,0.600000,0.000010,0
writeNetwork: 0.00170707702637
writeNetwork: 0.00317406654358
writeNetwork: 0.00361394882202
writeNetwork: 0.0104489326477
Clean start
Writing results to results/16/results0.csv
700 of total 700 models to run
175 models per MPI task
time series: writing total 70700 time step records
rank 0: 16,30,10.000000,1.000000,0.000000,None,2,5,0.600000,0.000000,0
writeNetwork: 0.00322198867798
writeNetwork: 0.0029091835022
Clean start
Writing results to results/16/results1.csv
700 of total 700 models to run
175 models per MPI task
time series: writing total 70700 time step records
rank 1: 16,30,10.000000,1.000000,0.000000,None,2,5,0.600000,0.000001,0
writeNetwork: 0.00337886810303
writeNetwork: 0.00154304504395

And what kind of job monitoring system should I use? Would this one be fine: https://github.com/nicolargo/glances?
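I was thinking of something like the following (assuming pip is available on the machine running the job):

```sh
# Install glances for the current user and start it in the terminal
pip install --user glances
glances
```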

Thanks a lot!