Open mgeplf opened 7 months ago
All modified and coverable lines are covered by tests :white_check_mark:
:exclamation: No coverage uploaded for pull request base (main@376c0a8). Click here to learn what that means.
:umbrella: View full report in Codecov by Sentry.
I think more changes are necessary to make the code really generic. Instead of taking `realigned-slices=atlas_densities/app/data/markers/realigned_slices_ccfv2.json` as an input parameter, it should take all the JSON files attached to the 4 input genes consumed by this step of the pipeline, i.e. gene_vip-metadata.json, gene_gad67-metadata.json, gene_pv-metadata.json, and gene_sst-metadata.json. Those 4 files should then be converted and gathered into a realigned_slices_ccfv2.json according to Dimitri's code.
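A minimal sketch of such a gathering step, assuming (hypothetically) that each per-gene metadata file stores its realigned slice indices under a `"slices"` key; the actual key layout must be taken from Dimitri's conversion code:

```python
import json
from pathlib import Path

# The 4 per-gene metadata files consumed by this pipeline step.
GENE_FILES = {
    "vip": "gene_vip-metadata.json",
    "gad67": "gene_gad67-metadata.json",
    "pv": "gene_pv-metadata.json",
    "sst": "gene_sst-metadata.json",
}


def gather_realigned_slices(directory, out_path):
    """Merge per-gene metadata JSONs into a single realigned_slices JSON.

    Assumes each metadata file holds its slice indices under "slices";
    this key is an illustration, not the confirmed Deep Atlas schema.
    """
    merged = {}
    for gene, filename in GENE_FILES.items():
        metadata = json.loads(Path(directory, filename).read_text())
        merged[gene] = metadata["slices"]
    Path(out_path).write_text(json.dumps(merged, indent=2))
    return merged
```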
> i.e. gene_vip-metadata.json, gene_gad67-metadata.json, gene_pv-metadata.json, and gene_sst-metadata.json

That can be done; I think that would obviate the need for the marker id parameter to `--marker`, since the paths you provided previously were of the style `ccfv2/$marker-id-metadata.json`.
> Those 4 files should then be converted and gathered into a realigned_slices_ccfv2.json according to Dimitri's code.

Which code are you talking about?
> That can be done; I think that would obviate the need for the marker id parameter to `--marker` since the paths you provided previously were of the style `ccfv2/$marker-id-metadata.json`.

Maybe a folder path could be given? With expected filenames in it?
> Which code are you talking about?

The code Dimitri shared on the atlas channel. See with @drodarie, he created such a conversion.
> Maybe a folder path could be given? With expected filenames in it?

No, we should always make things explicit, so it's clear what is required.
> The code Dimitri shared on the atlas channel. See with @drodarie, he created such a conversion.

I assume you mean this code: https://bluebrainproject.slack.com/archives/C016G4U2C8H/p1708092444378119

If we're just pulling values out of the .json files, I can change how that's handled to directly pull them out of $marker-id-metadata.json. The other parts of that code (changing from npy to nrrd) are better handled as a separate step.
I created a script to merge all the json results from Deep Atlas into a single json file that could be consumed by the atlas-densities pipeline. I agree with @Sebastien-PILUSO that having one json per experiment instead of adding an extra step is better.
However, IMO, this idea of removing the configuration file goes in the wrong direction with respect to the future of the pipeline. Let's imagine you decide to change the composition rule for inhibitory neurons: now it is GAD2 = PV + SST + GENE4, or worse if you want to use T-types. Then you would have to refactor everything. Now don't get me wrong, we would have to refactor a lot already, but this is not going in the right direction.
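For illustration only, a hypothetical configuration fragment (all key names invented, not the pipeline's actual schema) showing how such a changed composition rule could stay in data rather than in code:

```json
{
  "composition_rules": {
    "gad2": ["pv", "sst", "gene4"]
  }
}
```

With a file like this, swapping markers in or out of a rule is an edit to one JSON entry instead of a change to the pipeline's interface.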
Apart from that, I do not mind the changes proposed here too much. This is just a change of interface.
> However, IMO, I think this idea of removing configuration file goes in the wrong direction [...]

The same information is conveyed this way; it just means that one doesn't need to change a file to alter which files are used, and it's one less file to keep track of. If there are big changes, like you said, we'll have to refactor everything regardless.
So how about this revision of the code, which adds the input JSON files as inputs?
> So how about this correction of the code for adding the input json files as input?
I have been away on vacation, and am currently catching up.
To confirm, the final proposal for the command line is:
```
--marker=pv:PATH/TO/pv-metadata.json:PATH/TO/pvalb.nrrd
--marker=sst:PATH/TO/sst-metadata.json:PATH/TO/SST.nrrd
--marker=vip:PATH/TO/vip-metadata.json:PATH/TO/VIP.nrrd
--marker=gad67:PATH/TO/gad-metadata.json:PATH/TO/gad1.nrrd
--cell-density-standard-deviations=atlas_densities/app/data/measurements/std_cells.json
```
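If that form is adopted, each `--marker` value can be split on colons into its three parts; a sketch of the parsing (the function name and error message are assumptions, not the final interface, and colon-containing paths such as Windows drive letters would need extra care):

```python
def parse_marker(value):
    """Split a --marker value of the form name:metadata.json:volume.nrrd.

    Returns a dict with the marker name, the per-marker metadata JSON
    path, and the nrrd volume path.
    """
    parts = value.split(":")
    if len(parts) != 3:
        raise ValueError(f"expected name:metadata:nrrd, got {value!r}")
    name, metadata_path, nrrd_path = parts
    return {"name": name, "metadata": metadata_path, "nrrd": nrrd_path}
```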