Closed · golosio closed this 3 years ago
Thanks again for reporting this! Your first fix looks good to me, I'll correct this with the default parameter of K_stable
(of course you are also most welcome to make a PR if you want to!).
Regarding the follow-up issue: did you run the required simulations as specified in the README? Naively, it looks to me like the simulation output is missing. Just to prevent problems: for the simulations, you need access to HPC resources; in particular, you need a combined ~1.5 TB of RAM to instantiate the connectivity in memory. As an alternative, the original simulation data is available.
Thank you @AlexVanMeegen! I ran the downscaled simulation (run_example_downscaled.py) on a workstation. I am aware that it will not provide realistic distributions, but isn't it still suitable for producing the plots, even if they are not realistic? I just started running full-scale simulations on the JUSUF cluster in Juelich; I will let you know if those simulations go well. However, there is something that I still do not understand. The simulations write the spikes to gdf files, e.g. a263375d575756279cc32ea0e049e1ae-spikes-00001-06.gdf, where 06 is the local thread index, while the scripts in the figures/Schmidt2018_dyn folder look for spike recordings in npy format, e.g. spikes_V1_6E.npy. I could not find any piece of code that extracts the spikes from the gdf files and converts them into the spikes_{area}_{pop}.npy format...
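For context, NEST's .gdf spike files are plain text with one line per spike event (sender GID, then spike time); a minimal sketch of parsing that format with NumPy (the data here is a made-up in-memory example, not actual simulation output):

```python
import io

import numpy as np

# A NEST .gdf spike file is plain text, one spike per line:
# "<sender_gid> <spike_time_ms>". Simulated here with an in-memory buffer.
gdf_text = "101 12.3\n205 15.7\n101 20.1\n"

# np.loadtxt parses the two whitespace-separated columns
# into an (n_spikes, 2) float array.
spikes = np.loadtxt(io.StringIO(gdf_text))
print(spikes.shape)  # (3, 2)
```

Such an array (GIDs and times) is exactly the kind of data that ends up in the npy files.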
Ah, I see. In principle, I agree it could also work with a downscaled simulation, but I haven't tested that. I would have to check in detail how much is baked into the snakemake workflow.
Regarding your question: this exception is triggered if no npy files are found. First, this is probably not documented properly; can you pinpoint a place where you would have expected this information? Second, and I assume this was your point, it does not create recordings/spikes_{area}_{pop}.npy files but recordings/{hash}-spikes-{area}-{pop}.npy files. Do you have those for the downscaled simulation?
Thank you again @AlexVanMeegen. I will give you my suggestions about the documentation as soon as I have some results from the full-scale simulation; first I need to understand the differences between the output in the two cases, downscaled and full-scale. I do not have the recordings/{hash}-spikes-{area}-{pop}.npy files for the downscaled simulation, just the gdf files. My question is: which piece of code produces those recordings/{hash}-spikes-{area}-{pop}.npy files? I was able to find scripts that try to read this kind of file, but not the code that should produce them.
The exception that I linked above should produce the npy files; here is the final np.save. Or am I missing your point?
That's exactly what I was looking for, thank you! I see now: that piece of code checks whether the npy files exist, and if not, it reads the gdf files and converts them into npy files. Sorry, from your previous answer I thought the exception was just there to give an error message; I did not realize that it could also create the npy files.
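The check-then-convert pattern described here can be sketched as follows (function name, path layout, and the two-column gdf format are illustrative, not the model's actual API):

```python
import os

import numpy as np


def load_spikes(npy_path, gdf_paths):
    """Load spikes from the npy cache if present; otherwise parse the
    per-thread gdf files, concatenate them, cache the result as npy,
    and return it. Illustrative sketch, not the model's actual code."""
    if os.path.exists(npy_path):
        # Fast path: conversion already happened on an earlier run.
        return np.load(npy_path)
    # Slow path: parse each plain-text gdf file (ndmin=2 keeps the
    # shape (n, 2) even for files containing a single spike).
    parts = [np.loadtxt(p, ndmin=2) for p in gdf_paths]
    spikes = np.concatenate(parts) if parts else np.empty((0, 2))
    np.save(npy_path, spikes)
    return spikes
```

On the second call with the same npy_path, the gdf files are no longer touched.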
Yep, precisely. Sorry, in hindsight my comment was pretty ambiguous.
Maybe one more point: since this is in the analysis class, it needs to be initialized to trigger the gdf -> npy conversion. Thus, you probably want to pass analysis=True to the MultiAreaModel class.
Sorry again. If I pass analysis=True to the MultiAreaModel class in the script run_example_fullscale.py, I receive the error: FileNotFoundError: [Errno 2] No such file or directory: '/p/scratch/icei-hbp-2020-0007/mam/aa6370000042084c4725111b24007734/recordings/network_gids.txt'. Apparently, the script attempts the gdf -> npy conversion before the job is completed and before network_gids.txt and the gdf files are created. Should I use two separate scripts: a first one to run the full-scale simulation, and a second one, run after the simulation has finished, to do the analysis? In this case, do you have a template and/or recommendations for this second script? What is a clean way to make this script use the label/path of the previous simulation?
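A separate analysis script can guard against exactly this race by checking that the simulation output actually exists before triggering the conversion; a minimal sketch (the expected files are taken from the error above, but the function name and layout are hypothetical):

```python
import os


def outputs_ready(recordings_dir):
    """Return True once the simulation job has finished writing its
    output: the recordings directory must exist and contain both
    network_gids.txt and at least one per-thread gdf spike file.
    Illustrative guard, not part of the model's actual code."""
    if not os.path.isdir(recordings_dir):
        return False
    if not os.path.isfile(os.path.join(recordings_dir, "network_gids.txt")):
        return False
    return any(f.endswith(".gdf") for f in os.listdir(recordings_dir))
```

The analysis script would simply refuse to run (or wait) while this returns False.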
Apologies for the late answer; I hope it is still relevant. Yes, you are right: one first has to instantiate the class with analysis=False, run the simulation, and then instantiate the class with analysis=True. If you are running the simulation on HPC, I think the most sensible option is to use two scripts. Alternatively, you could adjust run_simulation.py and add a line at the end to instantiate the class with analysis=True. However, this has the drawback that the analysis gets executed within the same job as the simulation, although its hardware requirements are completely different.
All handling of labels/paths should be done under the hood. Thus, after running the simulation, you can instantiate M = MultiAreaModel(network_label, simulation=True, sim_spec=simulation_label, analysis=True) in the 'second script'. Put differently, the intended way to handle labels is to pass them to the MultiAreaModel class.
For a minimal example (in one script), see e.g. test_analysis.py.
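The {hash} in those recording paths is a label derived deterministically from the parameters, which is why a second script given the same labels ends up pointing at the first script's output directory. A toy illustration of that idea (not the model's actual hashing code; names are made up):

```python
import hashlib
import json


def param_label(params):
    """Derive a stable label from a parameter dict: identical
    parameters always map to the same directory name, so a separate
    analysis script can reconstruct the simulation's output path.
    Toy illustration only, not the model's actual code."""
    # sort_keys makes the serialization independent of dict ordering
    blob = json.dumps(params, sort_keys=True).encode()
    return hashlib.md5(blob).hexdigest()


params = {"areas": ["V1", "V2"], "scale": 1.0}
label = param_label(params)
print(label)  # 32-character hex digest, identical on every run
```

Because the label is a pure function of the parameters, nothing path-like needs to be passed between the two scripts by hand.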
Thank you for your answer @AlexVanMeegen. We were able to run the simulations on the JUSUF cluster. I'll try to run the analysis following your instructions.
Great to hear! Let me know how it works out.
Apologies for all these hiccups, I did the code review for this project when I was still young and naive, so I am at least partly responsible ^^
Thank you @AlexVanMeegen. I think this issue can be closed. The first part has been solved by the last commits, while the second part is more a matter of documentation. I will raise a new issue with suggestions for improving the documentation.
Thanks for reporting this, @golosio. Suggestions for the documentation would be highly appreciated!
In the directory figures/Schmidt2018_dyn, after setting LOAD_ORIGINAL_DATA = False in all Python scripts and in the Snakefile, and setting the variable chu2014_path in helpers.py appropriately, running $ snakemake gave me the error:

TypeError in line 94 of /home/golosio/multi-area-model/multi-area-model/figures/Schmidt2018_dyn/Snakefile: join() argument must be str or bytes, not 'list'
  File "/home/golosio/multi-area-model/multi-area-model/figures/Schmidt2018_dyn/Snakefile", line 94, in
  File "/usr/lib/python3.7/posixpath.py", line 94, in join
  File "/usr/lib/python3.7/genericpath.py", line 153, in _check_arg_types

Apparently, if in the Snakefile I replace os.path.join(DATA_DIR, SIM_LABELS['Fig3'], ... with os.path.join(DATA_DIR, SIM_LABELS['Fig3'][0], ... in all places where it occurs, the previous error does not appear; however, I receive another error:

Building DAG of jobs...
MissingInputException in line 37 of /home/golosio/multi-area-model/multi-area-model/figures/Schmidt2018_dyn/Snakefile_preprocessing: Missing input files for rule pop_rates:
/home/golosio/multi-area-model/data/157be7f2609cd1843099e7a6b4e8b218/recordings/spikes_VIP_5E.npy
/home/golosio/multi-area-model/data/157be7f2609cd1843099e7a6b4e8b218/recordings/spikes_V1_6E.npy
......
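The first error above is reproducible in isolation: os.path.join only accepts str or bytes components, so passing a list raises TypeError, and indexing the first element works because it yields a plain string. A minimal reproduction (SIM_LABELS here is a stand-in for the Snakefile's dict, with the hash taken from the paths above):

```python
import os

# Stand-in for the Snakefile's SIM_LABELS dict: note the value is a *list*.
SIM_LABELS = {"Fig3": ["157be7f2609cd1843099e7a6b4e8b218"]}

# Passing the list itself fails, as in the Snakefile traceback:
try:
    os.path.join("/data", SIM_LABELS["Fig3"])
except TypeError as err:
    print(err)  # join() rejects the list argument

# Indexing the first element passes a plain string and succeeds:
path = os.path.join("/data", SIM_LABELS["Fig3"][0])
print(path)
```

Whether taking element [0] is semantically right (rather than iterating over all labels) depends on what the Snakefile intends, which is presumably why the MissingInputException follows.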