I believe it should. They should end up in the hddm or root directory, since there are no special rules for deleting or collating them. MCwrapper sends back the entire directory and at its core makes few assumptions about file names. The real "issue" is allowing that plugin to run in the first place... that requires more careful thought.
So what's the potential issue with letting this plugin run? I would think it would be better to allow saving skims of "raw" data rather than saving all of the hit-level data for these large bggen runs. Can one run custom plugins at REST-production time?
This one... probably nothing. But what about other plugins? What about optionally built plugins? Do I just shrug and say, "here is a text box, knock yourself out"? If they put in danarest, do I need to do extra processing of that line? How would such an interface work? And how do I make it scale automatically?
That's fair. You might require a clear use case for these additional possibilities. It's probably something we want to control anyway, since a lot of low-level plugins don't work on simulated data.
I was going to say that this is more of an "expert mode" activity, but one could imagine coming up with some clever software that more people might use... I think it's unlikely that most people would want hit-level data like this, though.
The central production method allows custom JANA configs to be passed in. Additionally, all hddm (and root) files are copied back. This method should do what is wanted.
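For concreteness, a minimal sketch of such a custom JANA config is below. The skim plugin name is illustrative (check halld_recon for the actual plugin name), and how MCwrapper merges this with the default production settings is an assumption here, so the standard REST writer (danarest) is kept explicitly.

```
# Sketch of a custom JANA config for central production.
# Keep the standard REST writer and add the skim plugin;
# the skim plugin name is illustrative, not necessarily the real one.
PLUGINS danarest,pi0bcalskim
```

Since MCwrapper copies back the whole output directory, any extra ROOT or hddm files the skim plugin writes should come back alongside the REST files.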
For calibration studies, we'd like to run the BCAL pi0 skimmer on some bggen MC - so that the full smeared files aren't saved, only the pi0-related hits. The current halld_recon should handle this fine - does the current centralized production support the copying of extra skim files back?
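For reference, the offline version of this skim step would look roughly like the command below, run over the smeared hit-level files; the plugin and file names are illustrative rather than the exact ones.

```
# Run the (illustrative) BCAL pi0 skim plugin over a smeared MC file with hd_root;
# -P sets a JANA parameter on the command line.
hd_root -PPLUGINS=pi0bcalskim hdgeant_smeared.hddm
```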