So after some discussion about having to support different YAML files and how their handling can be automated from an analysis-software-support perspective, we had two ideas for the YAML files.
With regard to 2 (i.e., embedding of the YAML in the NWB files), this is already supported. PyNWB (and I believe also MatNWB) can write the schema to the NWB:N file.
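For point 2, a minimal sketch of the PyNWB side (the file name and session metadata are placeholders; cache_spec is the write option that embeds the schema in recent PyNWB releases):

```python
from datetime import datetime

from pynwb import NWBFile, NWBHDF5IO

# Minimal placeholder file; the session fields are arbitrary demo values.
nwbfile = NWBFile(
    session_description='demo session',
    identifier='demo-001',
    session_start_time=datetime.now().astimezone(),
)

# cache_spec=True asks the HDF5 backend to copy the schema (the core
# namespace plus any loaded extensions) into the file itself.
with NWBHDF5IO('demo.nwb', mode='w') as io:
    io.write(nwbfile, cache_spec=True)
```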
With regard to 1 (i.e., a repository where users can get the templates), I assume that by templates you mean the YAML extension files. We are actively working on creating a library for extensions so that users can easily share, access, review, and install extensions. I would expect this to come online in the next 2-6 months.
@oruebel
What's the right way to embed the schema?
Regarding the Java object call, if you ensure that the full path for
@nclack here is an example of an NWB file with a cached spec
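For reference, a rough way to inspect such a file with h5py, assuming the layout PyNWB uses for the cached spec (/specifications/&lt;namespace&gt;/&lt;version&gt;/&lt;source&gt;; the file name is a placeholder):

```python
import h5py

# In files written by PyNWB the cached schema lives under
# /specifications/<namespace>/<version>/<source>, where each source is
# a dataset holding one schema file serialized as a JSON string.
with h5py.File('demo.nwb', 'r') as f:
    specs = f['/specifications']
    for namespace in specs:
        for version in specs[namespace]:
            for source in specs[namespace][version]:
                raw = specs[namespace][version][source][()]
                print(namespace, version, source, '->', len(raw), 'bytes')
```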
What's the right way to embed the schema?
I've added a description for this here: https://github.com/NeurodataWithoutBorders/nwb-schema/pull/256/files

This used to be part of the specification at some point, but it was removed there because caching the spec is something that will likely have to be handled differently for different backends. As such, it has moved to the HDF5 backend. We wrote this up at some point, but it looks like it somehow did not make it into the docs. Let me know in case the description is not clear.
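As a usage sketch, reading such a file back without the original YAML sources installed could look like this in PyNWB (the file name is a placeholder):

```python
from pynwb import NWBHDF5IO

# load_namespaces=True reads the cached schema back out of the file and
# registers it, so types from extensions resolve even when the original
# YAML sources are not installed locally.
with NWBHDF5IO('demo.nwb', mode='r', load_namespaces=True) as io:
    nwbfile = io.read()
    print(nwbfile)
```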
@bendichter Related to #42. The schema may also need updating to support the format in the example file, as MatNWB will expect SpecFiles instead.
@ln-vidrio good point, there does seem to be a mismatch between the schema and the file that is output. The schema defines the specifications group in general, whereas this file defines it in root. In the schema, the contents of the specifications group are totally different: this file's specifications group does not use the SpecFile spec defined by the schema but uses its own group hierarchy. We'll have to sort this out on the pynwb end.
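One quick way to diagnose this kind of mismatch is to dump the hierarchy a given file actually contains, e.g. with h5py (a sketch; the file name is a placeholder):

```python
import h5py

def show(name, obj):
    # visititems passes paths relative to the starting group.
    kind = 'group' if isinstance(obj, h5py.Group) else 'dataset'
    print(f'{kind}: /specifications/{name}')

# Dump whatever actually sits under /specifications so it can be
# compared against the SpecFile layout defined by the schema.
with h5py.File('demo.nwb', 'r') as f:
    f['/specifications'].visititems(show)
```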
Closing as the original issue is resolved. The cached spec issue to follow is #149.
I'm trying to automate the importing of NWB files in Brainstorm. Brainstorm is written in Matlab and Java. The way I designed the importing step is to check whether the NWB core files have already been initialized in a designated path; if they haven't, the importer automatically pulls the NWB core files from GitHub, unzips them, and runs generateCore. However, since Brainstorm is written in Java and a Java object is already open, generateCore fails due to a clear-Java-object call it makes. My way around this was to ask users to restart Matlab and run generateCore while Brainstorm initializes (before the Java interface opens). So for now, users who want to use the default YAML provided by the NWB master branch are good to go.
However, once users start using their own YAML files, things will get a bit more complicated. Recently I worked on an ECoG dataset provided by @bendichter, and this dataset needed a different .yaml file to be initialized. Automating that extra YAML initialization can be done, but a more systematic approach to handling the YAML files is needed.
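On the PyNWB side, loading a dataset's extension namespace before reading can be sketched like this (both file names are hypothetical placeholders for whatever the dataset ships):

```python
from pynwb import load_namespaces, NWBHDF5IO

# Register the dataset's extension namespace before reading, so the
# extension types are known when the file is opened.
load_namespaces('ecog.namespace.yaml')

with NWBHDF5IO('ecog_session.nwb', mode='r') as io:
    nwbfile = io.read()
```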
So after some discussion about having to support different YAML files and how their handling can be automated from an analysis-software-support perspective, we had two ideas for the YAML files:

1. A repository where users can get the template YAML (extension) files.
2. Embedding the YAML in the NWB files themselves.
Do you have a plan in mind for which direction the .yaml file support will take?
Finally, regarding the clear-Java-object call, is there a way for generateCore to avoid it? The issue has been described here: https://github.com/brainstorm-tools/brainstorm3/issues/181
Thanks, Konstantinos