@hawc2 thanks for editing this lesson 😀
Let me know if you or the reviewers need anything from me. The Kaggle notebooks are currently private but I can add anyone who needs access - I'd just need a username.
@davanstrien I’m providing preliminary feedback on Part 2 of your Computer Vision tutorial, in light of feedback @nabsiddiqui gave you in Issue #342 for Part I: https://programminghistorian.github.io/ph-submissions/lessons/computer-vision-deep-learning-pt1.
I agree with Nabeel: this is a solid tutorial, and your explanations of complex machine learning methods are excellent, usually very readable and clear. When revising your two-part tutorial, you should start by integrating Nabeel's feedback for Part 1 before turning to my comments for Part 2: https://programminghistorian.github.io/ph-submissions/lessons/computer-vision-deep-learning-pt2.
In line with what Nabeel says about Part 1's introduction, my first thought is that your overview of Part 1's structure should foreshadow what Part 2 will do. A broader overview of the two parts and how they interconnect will help the reader walk through each part. Part 1 should give the overview of the whole, and Part 2 should briefly rehearse (and link to) what was discussed in Part 1 before outlining what it will cover. Make sure to link to Part 2 in the intro to Part 1, and again in its conclusion. On this note, the conclusion to Part 1 could include a few more sentences clarifying the transition between the two parts. Part 2 opens with a useful recap of what was covered in Part 1; I don't think that introduction needs much more elaboration, but a little more explanation at the start of Part 2 of the basic terms established in Part 1, and of why someone would undertake this very long tutorial and method, would help situate it for the reader.
I would also say that Nabeel's suggestion to explain concepts before introducing the relevant code applies equally to Part 2, although it comes up less often there.
For Part 2, I have detailed my revision suggestions below by section and paragraph:
Thanks for this great two-part lesson. Once you've revised both Part 1 and Part 2, we'll send it out for external peer review.
@davanstrien I’m providing preliminary feedback on Part 2 of your Computer Vision tutorial, in light of feedback @nabsiddiqui gave you in Issue #342 for Part I: programminghistorian.github.io/ph-submissions/lessons/computer-vision-deep-learning-pt1.
Thank you for this super thorough feedback, it's really appreciated 😀 I have time blocked out later this week to work on this so I'm hoping I will be able to integrate the current feedback from part 1 and part 2 by the end of the week (I have been known to be overly optimistic with estimating these things...)
I was very optimistic with this time estimate... I have hopefully addressed the majority of the comments now. I am waiting for some of my co-authors to respond to the questions relating to the Foucault section since they wrote that. Hopefully, I can respond to those sections fairly soon.
I have included more sign-posting between the two lessons but I suspect that this will need a final review following any other suggestions from the reviewers.
Let me know if there is anything else I need to do in the meantime and thank you again for the detailed suggestions for this lesson (I realise it's a long one...)
@hawc2 we have updated the conclusion section with the aim of improving the clarity. Hopefully, this addresses all of the editorial suggestions you've made. Please let me know if I have missed anything that needs looking at before this goes out for peer-review.
davanstrien, do you have a link to the Kaggle notebook for the second lesson?
Hello all,
Please note that this lesson's .md file has been moved to a new location within our Submissions Repository. It is now found here: https://github.com/programminghistorian/ph-submissions/tree/gh-pages/en/drafts/originals
A consequence is that this lesson's preview link has changed. It is now: http://programminghistorian.github.io/ph-submissions/en/drafts/originals/computer-vision-deep-learning-pt2
Please let me know if you encounter any difficulties or have any questions.
Very best, Anisa
Hi @anisa-hawes, thank you for reaching out. I have the link to the lesson, but I don't see a link to the Kaggle notebook that has the data and code. Or is there not a notebook that goes along with this lesson like there was for part 1?
Hello @cderose. I have only moved the .md file in this case.
I understand that @davanstrien's Kaggle notebooks are hosted externally and they are set to private. Please connect with @davanstrien so they can grant you access!
@hawc2 is best placed to help with this aspect if you have any further questions.
Hi @anisa-hawes, thank you for reaching out. I have the link to the lesson, but I don't see a link to the Kaggle notebook that has the data and code. Or is there not a notebook that goes along with this lesson like there was for part 1?
The notebook should be available here: https://www.kaggle.com/davanstrien/02-programming-historian-deep-learning-pt2-ipynb. Let me know if there are any issues getting access.
Thank you @davanstrien! Hello @cderose, here's the link ^^
@davanstrien and @anisa-hawes, thanks very much! I was able to copy the notebook over successfully and will share my notes this weekend.
Many thanks! Shout if there's anything I can help with at my end.
Another great lesson, @davanstrien! This one is, appropriately, the more technical of the two lessons. It very effectively highlights the many different steps and choices that are involved in preparing a dataset, training a model, and evaluating the model. Like with lesson one, I appreciate that you intersperse different examples of when and why you might do x versus y, and you link to a number of great resources for people who want to dive further into the decisions that have to be made.
My main thoughts are summarized below, but I would be happy to chat further or clarify any of my notes.
Main suggestions
As you rightly emphasize, the decisions we make when preparing a dataset and training a model should be closely tied to our end goals, but there isn't an explicit goal that we're training toward in this lesson (unlike in part one). Given the findings at the end, it seems like the use case for this lesson could be a meta one, where we're training a model with an eye toward studying how an imbalanced dataset (more images of humans than animals) might affect our model's learning. Stating a goal like that early on in the lesson can help situate the work we go on to do.
The dataset we use in the lesson has a lot of unlabeled images in it, which is very realistic. It would be helpful at a few points in the lesson if you would clarify how the unlabeled images are (or aren't) impacting the results of our model. Are we removing the unlabeled images? If we're not, how are they being validated? I tried to signal below a few spots where some of that discussion might happen.
Regarding Kaggle - in order to create the model, users will need to have enabled internet access in the notebook. To enable such access, they have to give Kaggle their telephone number. For this reason, it could be worth considering switching to a Colab notebook so that people don't have to share personal information. If internet access hasn't been enabled, users will see an error for the code in p86. Here's the StackOverflow page that helped me when I got the error:
Minor edits/thoughts
@cderose, thanks so much for this review. I plan to work through the listed suggestions either this week or early next week. For some of the other overarching points:
As you rightly emphasize, the decisions we make when preparing a dataset and training a model should be closely tied to our end goals, but there isn't an explicit goal that we're training toward in this lesson (unlike in part one). Given the findings at the end, it seems like the use case for this lesson could be a meta one, where we're training a model with an eye toward studying how an imbalanced dataset (more images of humans than animals) might affect our model's learning. Stating a goal like that early on in the lesson can help situate the work we go on to do.
I will add some more contextual information at the start of this lesson to point to these aims. One of the reasons we chose this example was to prepare people for the typical outcome of ML models not working as expected. Many teaching materials focus only on the 'happy path', where the model trains well and the dataset is perfect. We wanted to bring this up in the lesson since a clean outcome is even less likely when applying ML to humanities data. I will make this more explicit upfront, though, so that the model's poor performance in some cases doesn't come across as so abrupt in the lesson.
The dataset we use in the lesson has a lot of unlabeled images in it, which is very realistic. It would be helpful at a few points in the lesson if you would clarify how the unlabeled images are (or aren't) impacting the results of our model. Are we removing the unlabeled images? If we're not, how are they being validated? I tried to signal below a few spots where some of that discussion might happen.
I will add some more discussion and integrate this into the places you suggest.
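To give a sense of what I have in mind, one way of setting the unlabelled images aside before building the dataloaders might look roughly like the sketch below (it assumes the labels sit in a pandas DataFrame with a `label` column; the file name and column name are placeholders rather than the lesson's actual code):

```python
import pandas as pd

# Hypothetical labels file: one row per image, with an empty 'label'
# value for images that were never annotated.
df = pd.read_csv("labels.csv")

labelled = df[df["label"].notna()]   # rows that can be used for training/validation
unlabelled = df[df["label"].isna()]  # set aside so they don't skew the metrics

print(f"{len(labelled)} labelled images, {len(unlabelled)} unlabelled images")
```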
Regarding Kaggle - in order to create the model, users will need to have enabled internet access in the notebook. To enable such access, they have to give Kaggle their telephone number. For this reason, it could be worth considering switching to a Colab notebook so that people don't have to share personal information. If internet access hasn't been enabled, users will see an error for the code in p86. Here's the StackOverflow page that helped me when I got the error
Thanks for pointing this out. I think I may be able to get around the need for an internet connection by bundling the ResNet model weights as a dataset in the Kaggle space. I think it also makes sense to have a Colab option. This shouldn't be too hard to add, but I'll hold off until the content is (almost) final before doing this.
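As a rough sketch of the workaround I have in mind (the Kaggle dataset name and weights file name below are placeholders, not the final setup), the idea is to copy the bundled weights into the cache directory torchvision checks before it tries to download anything:

```python
import pathlib
import shutil

# Placeholder path: assumes the torchvision ResNet checkpoint has been
# uploaded as a Kaggle dataset attached to the notebook.
bundled_weights = pathlib.Path("/kaggle/input/resnet18-weights/resnet18-5c106cde.pth")

# torchvision looks in this cache before attempting a download, so
# pre-seeding it means the notebook no longer needs internet access enabled.
cache_dir = pathlib.Path.home() / ".cache" / "torch" / "hub" / "checkpoints"
cache_dir.mkdir(parents=True, exist_ok=True)
shutil.copy(bundled_weights, cache_dir / bundled_weights.name)
```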
Thanks again for this review
Things for @davanstrien to check: that $a$ is rendered correctly.

Hi @cderose, thanks again for your comments. I have made changes to address most of the more minor comments. I will hold off on a few suggestions until I've got comments from @mblack884. I also plan to try to make the Kaggle process more straightforward and to add a Colab-hosted version of the notebook. I will do that towards the end so it's easier to keep all of the content between the notebooks and the lesson in sync.
I really enjoyed reading and working through this lesson. The discussion of methodological decisions was generally clear, and the lesson did well in integrating short explanations of the more technical/theoretical aspects of the work. The many links throughout work well to offer users deeper reading on those topics while keeping the focus of the lesson on implementing its method.
1) A good overview of lesson objectives or learning outcomes at the start of the lesson, like the list found at the top of Part 1, would be helpful. I try to provide them for longer, complex units in the classes I teach to emphasize the relationship between theory & practice. They would also have the added benefit of making the specific contributions of the lesson more visible to people who are browsing through the PH's collections.
2) The Kaggle notebook loaded and ran correctly with this lesson. I could step through the process as I read and follow the output. In the interest of usability, you should add a reminder about setup (sign in, copy/edit the notebook, enable Internet under settings) to ensure that the notebook will proceed past p31. I'm not familiar enough with Kaggle to know whether it's possible to pre-download and store the model in the notebook itself, to simplify setup and/or avoid phone verification for those who don't want to give out their number.
3) I like the conclusion as it helps to pivot the lesson's methods towards answering big research questions. But I think it might be more effective if you were to foreground the question of human/animal/landscape boundaries earlier. I'd imagine that most readers would have a project in mind when seeking out this lesson, but having a short paragraph noting that you're going to explore the fuzziness of cultural labels or boundaries between concepts using computer vision at the outset may help readers better understand the trajectory of your decisions at each stage of the process.
Other minor suggestions:
p38: Briefly mention what should be improved here. No need to get into how or why here as that would distract too much from the lesson. If you want to include some recommendation of how to handle the images without labels, I'd suggest a short note or appendix at the end.
p45: A short, conceptual definition of F-beta would be helpful here. There's a more technical definition provided in p51 (which is fine as is), but I could see some potential confusion given that the topic indicated by the section heading isn't directly acknowledged until several paragraphs into the section.
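For instance, a one-sentence gloss plus the standard formula would probably do it (just a suggestion for wording): F-beta generalises the F1 score by treating recall as $\beta$ times as important as precision,

$$F_\beta = (1 + \beta^2)\,\frac{\mathrm{precision} \cdot \mathrm{recall}}{(\beta^2 \cdot \mathrm{precision}) + \mathrm{recall}}$$

so $\beta = 1$ gives the usual harmonic mean of precision and recall, while larger values of $\beta$ weight recall more heavily.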
p68: You start using the yellow textbox here to mark off clarifications. I was initially confused because there were similar remarks above that didn't use it. It may be a good way to include a short explanation of what needs to be improved in the figure near p38.
Thanks @mblack884. @davanstrien, let me know what you are thinking timeline-wise in terms of revisions.
@mblack884 thanks so much for doing this review 🙂 @nabsiddiqui I should be able to incorporate these changes early next week. I'll let you know if there are any issues with that.
@nabsiddiqui I have incorporated the suggestions into the lesson content and made a few formatting fixes. The next step is to sync the lesson and notebooks on Kaggle and create a Colab version. I will do that tomorrow unless you think any other changes need to happen to the structure of the lessons.
Chiming in as co-editor here, we should have a discussion about what role the Kaggle and Colab notebooks will play. We've discussed as an editorial board how this is relatively new terrain for Programming Historian, and our current thinking is that the Kaggle and Colab notebook versions shouldn't replicate the PH lesson. Most of the commentary doesn't need to be included, just the basic section guides, steps, and code. Does that make sense?
@hawc2 my main reason for including the full text is to help readers avoid having to switch between windows, but I'm happy to defer to the editorial board on this. If you prefer to keep things separate, I will make a new version of the notebooks that strips out most of the prose. I will leave some in to provide a bit of signposting at least. Let me know if that sounds okay to you?
That sounds good. Feel free to make a separate copy of the pared-down notebooks, and we can present a near-final copy of your lesson with the notebooks to the PH team. We may use it as an example in the future of how authors can balance between the two options.
I see why you want to make a Colab notebook, but I wonder if it's necessary, or if there's at least a way to link all extraneous resources to the tutorial in one place like the Kaggle environment, just so readers don't get decision fatigue . . .
@hawc2 I have made a version of the lesson one notebook with the prose removed (https://www.kaggle.com/davanstrien/cleaned-01-progamming-historian-deep-learning-pt1). I have kept the headings to signpost where the code fits in the lesson. If this seems like a good approach to you/PH team I can do the same for part 2 and update the link in the lessons to point to that version of the notebook.
Yeah, this looks good. We can do a final assessment of it once your revisions on both lessons are complete. It's possible a small amount of commentary on the code could be reinserted.
Does it make sense to combine Parts 1 and 2 of the lesson into a single Kaggle notebook?
One concern I have with Kaggle as the host for the data, and the notebook, is sustainability. For the code, do you foresee separate maintenance issues for your notebooks, besides the lesson itself? For the data, is it possible to also store the data through either Github's large file storage option, or Zenodo? A single GitHub repo that serves as a hub for these secondary resources may also be useful.
Does it make sense to combine Parts 1 and 2 of the lesson into a single Kaggle notebook?
Happy to combine both parts into a single Kaggle notebook.
One concern I have with Kaggle as the host for the data, and the notebook, is sustainability. For the code, do you foresee separate maintenance issues for your notebooks, besides the lesson itself?
For Kaggle, everything should be fairly stable. The Kaggle kernel will have a pinned Docker image, so all the underlying dependencies for the lesson will remain fixed. There is a chance that the fastai API will change, but since the lesson mainly uses high-level APIs this isn't very likely.
More generally, the risk is that Kaggle as a platform no longer exists. I think this is fairly unlikely in the short to medium term, but it is a possibility. My preference for Kaggle is that it gets people up and running with this type of work fairly quickly. Working with some type of cloud platform is often a prerequisite for doing deep learning work. Obviously, we don't go into this in much detail here, but I think it sets people up with the right expectations about what will be required if they pursue deep learning further.
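As a small extra safeguard against API drift, one thing I could add near the top of the notebook is a simple version check along these lines (the version string is a placeholder rather than the lesson's actual pin):

```python
import fastai

# Warn readers early if they are running a different fastai major version
# than the one the lesson was written against (placeholder version).
expected_major = "2."
if not fastai.__version__.startswith(expected_major):
    print(
        f"Warning: this lesson was tested with fastai {expected_major}x, "
        f"but you are running {fastai.__version__}. Some code may behave differently."
    )
```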
For the data, is it possible to also store the data through either Github's large file storage option, or Zenodo?
Yes, these datasets are both on Zenodo (https://doi.org/10.5281/zenodo.5838410 / https://doi.org/10.5281/zenodo.4487141).
A single Github repo that serves as a hub for these secondary resources may also be useful
We started something like that here: https://github.com/davanstrien/Programming-Historian-Computer-Vision-Lessons-submission. I will add the Zenodo links there.
From my side, I think I have addressed all of the major reviewer comments/suggestions, and the remaining tasks are on the practical setup side. If you are happy to suggest Kaggle as a first option, I can get all of that in place and update the Git repository with relevant Zenodo links. I anticipate this only taking half a day or so; hopefully, we'd be ready to publish after that.
This all sounds great, Daniel. I'll chat with @nabsiddiqui https://github.com/nabsiddiqui and we'll get back to you with final steps
@davanstrien are your Zenodo datasets linked in both parts of the lesson tutorials? Just want to make sure everything is synced up
@davanstrien are your Zenodo datasets linked in both parts of the lesson tutorials? Just want to make sure everything is synced up
The datasets are hosted in Kaggle so the Kaggle notebooks load the data directly from Kaggle. For the Colab version, I will grab the data from Zenodo.
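Concretely, the Colab data-loading cell will probably look something like the sketch below (the archive filename is a placeholder; the real one is listed on the Zenodo record at https://doi.org/10.5281/zenodo.5838410):

```python
import io
import zipfile

import requests

# Placeholder filename: check the Zenodo record for the actual archive name.
url = "https://zenodo.org/record/5838410/files/lesson-data.zip?download=1"

response = requests.get(url)
response.raise_for_status()

# Unpack the archive into a local 'data' directory for the notebook to use.
zipfile.ZipFile(io.BytesIO(response.content)).extractall("data")
```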
I'm thinking more in the sense that someone should be able to go through your entire lesson without having to use Kaggle if possible. Can you link the dataset in the tutorial through Zenodo so they can download the datasets and run the script locally if they prefer?
Per the discussion on #342 with @nabsiddiqui and @davanstrien, both Parts 1 and 2 of this lesson are now ready for copyediting, @anisa-hawes.
That's awesome! Thanks so much, @nabsiddiqui @hawc2 @cderose @mblack884, for all the work you've put into this 🤗
The lessons will make a terrific addition to Programming Historian, @davanstrien. Thank you for the opportunity to have early access to them!
Thank you, all! I am back from Leave today. I have #436 (another two-part lesson) on my desk for this week, so I will work on copyediting this lesson next week, aiming to deliver by the end of the day on Friday 29th April.
@anisa-hawes thanks! Feel free to ping me if anything needs clarifying etc.
@anisa-hawes I'm finishing up Part 1, but wondering if Part 2 is also ready to go?
@davanstrien I'm updating the YAML for Part 2 and need a few things from you, including an abstract and, if you have a preference, a second image. You can see the YAML at the header of the lesson markdown with my recent updates: https://github.com/programminghistorian/ph-submissions/edit/gh-pages/en/drafts/originals/computer-vision-deep-learning-pt2.md
Abstract for part 2:
This is the second of a two-part lesson introducing deep learning based computer vision methods for humanities research. This lesson digs deeper into the details of training a deep learning based computer vision model. It covers some challenges one may face due to the training data used and the importance of choosing an appropriate metric for your model. It presents some methods for evaluating the performance of a model.
suggested image:
source: https://www.flickr.com/photos/britishlibrary/11162263063/
If that one works I would suggest this for the avatar_alt: A cropped illustration of a mechanical diagram of a machine with pipes.
Thanks @davanstrien, looks great. Do you want to update your bio? This is what we currently have:
- name: Daniel van Strien
team: false
bio:
en: |
Daniel van Strien is a Library Science student at City University, London.
es: |
Daniel van Strien es estudiante de Biblioteconomía en la Universidad de la Ciudad, Londres.
pt: |
Daniel van Strien é estudante de Biblioteconomia na City University, Londres.
You can also provide your ORCID if you have one.
Updated bio (let me know if you want me to attempt translations too).
- name: Daniel van Strien
orcid: 0000-0003-1684-6556
team: false
bio:
en: |
Daniel van Strien is Digital Curator for the Living with Machines project at the British Library, London
es: |
Daniel van Strien es curador digital del proyecto Living with Machines en la Biblioteca Británica de Londres.
pt: |
Daniel van Strien é curador digital do projeto Living with Machines na British Library, Londres
@melvinwevers @kasparvonbeelen @kmcdono2 @tpsmi may also need to provide/update these.
yes, thanks! Please do provide translations if you can. Here's the template for your co-authors:
- name: Jim Clifford
team: false
orcid: 0000-0000-1111-1111
bio:
en: |
Jim Clifford is an assistant professor in the Department of History
at the University of Saskatchewan.
added translations
- name: Katherine McDonough
team: false
orcid: 0000-0001-7506-1025
bio:
en: |
Katherine McDonough is Senior Research Associate in History on Living with Machines and UK PI on Machines Reading Maps at The Alan Turing Institute.
es: |
Katherine McDonough es investigadora asociada sénior en historia para el proyecto Living with Machines e investigadora principal para el proyecto Machines Reading Maps en The Alan Turing Institute.
pt: |
Katherine McDonough é Pesquisadora Associada Sênior em História para o projeto Living with Machines e Investigadora Principal do projeto Machines Reading Maps no The Alan Turing Institute.
fr: |
Katherine McDonough est une chercheuse postdoctorale en histoire pour le projet Living with Machines et la directrice du projet Machines Reading Maps au Alan Turing Institute.
I've attempted translations for my bio.
- name: Kaspar Beelen
orcid: 0000-0001-7331-1174
team: false
bio:
en: |
Kaspar Beelen is Research Associate in Digital Humanities for the Living with Machines project.
es: |
Kaspar Beelen es investigador asociado en humanidades digitales para el proyecto Living with Machines.
pt: |
Kaspar Beelen é Pesquisador Associado em Humanidades Digitais para o projeto Living with Machines.
Here is my bio with automatic translations.
@davanstrien are we still waiting on one author's bio? I'll move forward with publication next week. In the meantime, can you post the following to both Computer Vision tickets?
I the author|translator hereby grant a non-exclusive license to ProgHist Ltd to allow The Programming Historian English|en français|en español to publish the tutorial in this ticket (including abstract, tables, figures, data, and supplemental material) under a [CC-BY](https://creativecommons.org/licenses/by/4.0/deed.en) license.
@hawc2 we're still waiting on one more, but I know @tpsmi is on leave, so might not have seen these. I've written something for Thomas now.
- name: Thomas Smits
orcid: 0000-0001-8579-824X
team: false
bio:
en: |
Thomas Smits is a postdoc in the History of Bias Project at the University of Antwerp
pt: |
Thomas Smits é pós-doutor em History of Bias Project na Universidade de Antuérpia
es: |
Thomas Smits es un postdoctorado en el Proyecto de Historia del Sesgo en la Universidad de Amberes.
I the author hereby grant a non-exclusive license to ProgHist Ltd to allow The Programming Historian English|en français|en español to publish the tutorial in this ticket (including abstract, tables, figures, data, and supplemental material) under a CC-BY license.
The Programming Historian has received the following proposal for a lesson on 'Computer Vision for the Humanities: An Introduction to Deep Learning for Image Classification, Part 2' by @davanstrien. This lesson, which is in two separate parts, is now under review. This ticket is only for Part 2, which can be read here:
http://programminghistorian.github.io/ph-submissions/en/drafts/originals/computer-vision-deep-learning-pt2
@nabsiddiqui is reviewing Part I. This review of Part II will take into consideration the feedback provided on the Issue for Part I, available here: https://github.com/programminghistorian/ph-submissions/issues/342
Please feel free to use the line numbers provided on the preview if that helps with anchoring your comments, although you can structure your review as you see fit.
I will act as editor for the review process. I will work with @nabsiddiqui, editor of Part I, to synchronize the review feedback for the two parts. My role is to solicit two reviews from the community and to manage the discussions, which should be held here on this forum. I have already read through the lesson and provided feedback, to which the author has responded.
Members of the wider community are also invited to offer constructive feedback, which should be posted to this message thread, but they are asked to first read our Reviewer Guidelines (http://programminghistorian.org/reviewer-guidelines) and to adhere to our anti-harassment policy (below). We ask that all reviews stop after the second formal review has been submitted so that the author can focus on any revisions. I will make an announcement on this thread when that has occurred.
I will endeavor to keep the conversation open here on GitHub. If anyone feels the need to discuss anything privately, you are welcome to email me.
Our dedicated Ombudsperson is Ian Milligan (http://programminghistorian.org/en/project-team). Please feel free to contact him at any time if you have concerns that you would like addressed by an impartial observer. Contacting the ombudsperson will have no impact on the outcome of any peer review.
Anti-Harassment Policy
This is a statement of the Programming Historian's principles and sets expectations for the tone and style of all correspondence between reviewers, authors, editors, and contributors to our public forums.