p.5: regarding significant changes you made after your first training event, I am not entirely clear what you mean by "to bring teams of researchers to a hackathon or training event rather than form teams from multiple institutions"--what does the latter part mean? Did you describe it in more detail in an earlier paper that could simply be referenced here? If not, I think a little more description would be helpful. (Actually, I see you explain it further down, but I was lost for a while wondering what you meant at this point, so moving the explanation closer to the bullet point would avoid confusing the reader.)
p.6: I think it would help a reader to understand why you say having computer scientists was "more difficult than anticipated". The language is vague, and this section would benefit from more explanation of what the difficulties specifically were. Why did you anticipate that it would be difficult? In what ways was it even more difficult than you expected? Vocabulary? Lack of pedagogical training? Arrogance? Other things entirely?
p. 8: "It is not feasible to manually annotate or interpret imagery at large spatial scales"--I think this needs clarification. Do you mean it is hard to annotate a large volume of images manually, or that large-spatial-scale imagery cannot be annotated? Or both? Also, depending on your intention, it might be more correct to use the term high-resolution imagery (rather than "large scale"). I think this section is really intriguing and would benefit from more explanation of the use case(s) you are considering.
p. 9: I strongly agree with your statements about sustained investment in collaboration. This is probably true of all truly multidisciplinary collaboration: really successful collaborations require committed, conscious leadership together with team-building, sharing (and translating!) of jargon and vocabulary, and a willingness to trust in the expertise of others, especially when it has limited or no overlap with your own. Highly functioning collaborative teams truly are greater than the sum of their parts--but they take time and investment.
At the bottom of p. 9, typo: "...RADAR processing. resources."
One additional question that bothers me in this whole area involves the high expectations we seem to place, implicitly, on short-term training experiences. I am a little concerned that we are overestimating the value of such experiences with regard to student retention of the material. I am often left unsatisfied by discussions of how enjoyable short workshops are (which is true for many participants) that brush aside any measures of retention--probably because retention is so hard to measure. This might be an additional point to include, especially since NSF cares about metrics in proposals (and rightly so: if they are going to fund something, we should at least plan to try to measure why it was worth the investment).
And finally, a general observation I had at the workshop in Davos: I think this is a problem for Carpentry-style workshops generally, not just polar-science-oriented ones. Trainers really need to be prepared for a huge variation in computing experience, and to have strategies ready for not losing the people with rudimentary experience while still providing the material and opportunity for more computationally savvy students to make appropriate progress in their own knowledge.