livecomsjournal / livecomsjournal.github.io

Content for policy/instructional pages of the Living Journal of Computational Molecular Science (LiveCoMS)
https://livecomsjournal.github.io

What are the editorial policies for comparisons of molecular simulation programs? #30

Open mrshirts opened 7 years ago

davidlmobley commented 7 years ago

I don't think the language here is clear. You're talking about comparisons of simulation packages, yes? And when you say "one member of each community" you mean a member of the developer community?

> When presubmission letter comes in, can ping lead developers for OK? Or require authors to get buy-in (probably should push as much as possible to authors)?

Should require authors to get buy-in, specify who they talked to, etc.

> Do we allow comparing same simulations on different architectures?

What do you mean? Comparing computations of the same property across different architectures? I'm not imagining a case offhand where this would be important, but I also wouldn't want to say it's NOT allowed in case someone has a case where it IS important. I think they'd have to make that case and we would deal with it when they do, at the level of the presubmission letter.

> Editorial Board discussion: Do you need lead developers for community codes to assent? Or just any main developer/PI involved in a community code? Lead and main developer may be fuzzy.

Point of order: Are we using GitHub for editorial board discussion (most of the board probably won't be monitoring it), or are you e-mailing these out to people and asking for opinions, or asking them to visit the GitHub issue and comment?

I think this has to be fuzzy to some extent, since there is no formal designation of such things. We probably want to say something like "a major developer or the major developers, as appropriate". Unless we're going to provide a LIST of who would be acceptable, I don't know how to deal with it other than leaving some fuzziness here.

Probably the only firm rule should be that authors have to have such a person involved, and they should say who they have (and why they were chosen) at the level of the presubmission letter so we can object if we think the person isn't appropriate.

mrshirts commented 7 years ago

Responding just to this first:

> Point of order: Are we using GitHub for editorial board discussion (most of the board probably won't be monitoring it), or are you e-mailing these out to people and asking for opinions, or asking them to visit the GitHub issue and comment?

We had discussed using GitHub issues as the forum to nail the policies down. Exactly how to handle communication is something we need to resolve. We could debate the policies in email instead, though then we'd lose the record. Presumably, once the policies get mostly worked out, a document will be added that can then be edited more finely.

I will be emailing the editors in a bit, asking them to visit here - unless, of course, we want to change the communication flow model.

davidlmobley commented 7 years ago

I'm fine with this. I just want to be realistic about how many people are monitoring GitHub at this point (it's you, me, Justin Gilmer, and Eliseo -- which is to say, none of the rest of the editorial board), so we're going to have to get them on here.

Probably the thing to do is to e-mail with links to the issues requiring discussion and also ask for their GitHub IDs (we can make them collaborators on the repo).

mrshirts commented 7 years ago

Yes! I was still composing the email. I wanted to get the issues transferred from the workshop docs to here before I tried to send people to the page with nothing on it.

mrshirts commented 7 years ago

> I don't think the language here is clear. You're talking about comparisons of simulation packages, yes?

Correct. Fixed.

> And when you say "one member of each community" you mean a member of the developer community?

Correct. Fixed.

mrshirts commented 7 years ago

> Do we allow comparing same simulations on different architectures?
>
> What do you mean? Comparing computations of the same property across different architectures?

Yes.

> I'm not imagining a case offhand where this would be important, but I also wouldn't want to say it's NOT allowed in case someone has a case where it IS important. I think they'd have to make that case and we would deal with it when they do, at the level of the presubmission letter.

Right, so we just have to be clearer in how we define the scope of the simulation package comparisons we accept.

davidlmobley commented 7 years ago

> Right, so we just have to be clearer in how we define the scope of the simulation package comparisons we accept.

I'm not actually sure this is an issue of "being clearer," as I think what we're saying is that we accept "simulation package comparisons," where this might include comparisons of different simulation packages, comparisons of the same simulation package across different architectures, etc. Have we come up with something it does NOT include? If not, then I'd just say we are interested in such comparisons and sort it out at the presubmission letter stage. If we find we need a more specific policy because we're getting lots of letters describing something we think is out of scope, we can add it then. But at this point we just want to be flexible, I think.

HaoZeke commented 6 years ago

Given the large variation in use cases and also in hardware capabilities, it's doubtful that the existing developers of LAMMPS, GROMACS, ESPResSo, or the rest will come to a consensus on the limitations and capabilities of each package.

Maybe a speed comparison or a feature (?) comparison would be better?

I believe the closest existing thing to such a comparison is the backlinking done by the LAMMPS site to similar software.

Many packages are not formally part of the main simulation program (e.g., PyLammps and LAMMPS, Moltemplate and LAMMPS, mbtools and ESPResSo) and have their own developers, which would further complicate matters.

Perhaps it would be better to leave these comparisons to be approved by users (the community?), instead of trying to get validation from the developers?

davidlmobley commented 6 years ago

@HaoZeke:

> It's doubtful that the existing developers of LAMMPS, GROMACS, ESPResSo, or the rest will come to a consensus on the limitations and capabilities of each package.

It's not so much that we want them to come to a consensus; rather, we think that having people benchmark software without any involvement from that software's developers will often be a recipe for the developers simply responding to the study with, "But they did X, Y, and Z wrong, and they should have asked us first..." So we hope that people comparing programs will at least confirm with a developer that they are doing things "right".

Is there something in the language we have up which makes it seem like we're saying we expect a consensus? If so, we would want to correct this.

> Perhaps it would be better to leave these comparisons to be approved by users (the community?), instead of trying to get validation from the developers?

Also, we absolutely want the user community to do these types of studies, and we are NOT expecting the developers to necessarily do validations/comparisons themselves; we realize developers may not have the time or interest. We're just trying to avoid the scenario where someone does a comparison and the developers then disavow it/complain loudly/point out all the reasons it is invalid and should be ignored, etc. Our thought was that the best way to avoid this is to ask users doing validation to have a developer sign on to provide assistance, or to check what they plan on running, or similar.

We're very much open to suggestions on how best to allow and incentivize community involvement in validation while avoiding the above issues; even specific wording suggestions would be helpful if you have ideas.

davidlmobley commented 6 years ago

@jppiquem asks these questions (in addition to providing some updated proposed text below):

Other things can be added, but maybe we should first observe how people react to this type of manuscript. I think one key thing is to have representative authors linked to the developments genuinely discussing, without bias, the real-life performance of their algorithms and software. We may have to say no to isolated groups with a narrow interest in their own methodology (there are plenty of journals for such papers, and maybe we are not the place). This is to be discussed. I'll be online for the next meeting.

He proposes something like this text as an update (major changes in boldface):

What is a publishable comparison for molecular simulation packages?

Simulation comparison papers describe attempts to perform the same calculation with a range of different simulation programs. Such comparisons should be updated periodically with different versions of the same programs (or potentially additional programs). Various types of code comparisons can be considered: (i) speed, scalability, and large-scale applicability comparisons of a given set of algorithms, and analysis of their implementation on different software platforms; (ii) precision and stability issues of simulations using new or improved algorithms, or using implementations on different modern computational platforms (CPUs, GPUs, FPGAs...). Although recommended, this classification is not strict, and authors are welcome to propose other meaningful comparisons of interest to the modeling community.

Additional factors considered in review of comparisons of molecular simulation programs

Revision schedule for comparisons of molecular simulation programs
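
For concreteness, here is a minimal sketch (Python with NumPy) of the kind of check that category (ii) in the proposed text envisions: compute the same observable from the output of two packages and ask whether the results agree within statistical error. The file names, units, and two-sigma tolerance are illustrative assumptions, not proposed policy.

```python
import numpy as np

# Hypothetical check (not LiveCoMS policy): do two simulation packages
# agree on the same observable within statistical uncertainty?
# File names and the one-energy-per-line format are illustrative.

def load_energies(path):
    """Load a one-column text file of per-frame potential energies (kJ/mol)."""
    return np.loadtxt(path)

def mean_and_sem(x):
    """Mean and a naive standard error of the mean; a real comparison
    should account for time correlation (e.g., via block averaging)."""
    return x.mean(), x.std(ddof=1) / np.sqrt(len(x))

e_a = load_energies("package_a_energies.dat")  # exported from package A
e_b = load_energies("package_b_energies.dat")  # exported from package B

(m_a, s_a), (m_b, s_b) = mean_and_sem(e_a), mean_and_sem(e_b)
diff = abs(m_a - m_b)
tol = 2.0 * np.hypot(s_a, s_b)  # ~95% combined statistical uncertainty

print(f"Package A: {m_a:.3f} +/- {s_a:.3f} kJ/mol")
print(f"Package B: {m_b:.3f} +/- {s_b:.3f} kJ/mol")
print("Agree within statistical error" if diff <= tol
      else "Discrepancy exceeds statistical tolerance")
```

A real comparison would of course need matched inputs (identical force field, cutoffs, integrator settings) and correlation-aware error estimates; the point here is only the shape of the check.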

dmzuckerman commented 6 years ago

I want to raise an issue about scope/clarity that @davidlmobley and @mrshirts discussed earlier. The opening description is: "Simulation comparison papers describe attempts to perform the same calculation with a range of different simulation programs."

I suggest amending this to: "Simulation comparison papers describe attempts to perform the same calculation, or calculate the same quantity, with a range of different simulation programs or methods/algorithms."

Perhaps we can also clarify this bullet point in the Additional Factors: "Are best practices being used in the simulation comparisons?"

Could be amended to: "Are best practices being used for each package/platform/method and for data analysis in the simulation comparisons?"

davidlmobley commented 6 years ago

@jppiquem Should we go ahead and get these changes made?

mrshirts commented 6 years ago

If we want to have "calculate the same quantity with methods/algorithms," we should probably feature that a little more on the website. We also want to make sure such articles have a clear scope -- e.g., "calculating potentials of mean force of X in a lipid bilayer." If X is a bunch of small molecules, you probably only want one paper. But what about peptides? Are they different enough from small molecules to count as a different quantity? Do we handle that at the presubmission letter phase, where the authors convince the editors that their comparison is sufficiently different from the others?

dmzuckerman commented 6 years ago

I agree the scope should be checked at the presubmission stage. I think that's clear, or at least implicit, for all our articles, but it doesn't hurt to reiterate. After all, one thing that sets our journal apart is that we're aiming to have one paper per topic, which is quite different from the standard model.

davidlmobley commented 6 years ago

I'd think most of this should be dealt with at the presubmission stage; having too many firm policies early will inhibit our flexibility when we need it.