titaniumbones opened this issue 7 years ago
Agree! This tutorial merely illustrates the currently possible workflow, not where the project is headed (for obvious reasons).
Perhaps we should break the problem of the default IPFS ports being locked down on many guest networks out into an actual issue?
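For the record, here's a quick way to check whether the default port is the culprit. This is a rough sketch, and HOST is a placeholder for an address you'd pick from your own bootstrap list:

```sh
# show the multiaddrs the local node listens on (tcp/4001 by default)
ipfs config Addresses.Swarm

# with the daemon running, count the peers it has actually reached;
# zero peers on a network that otherwise has internet access points to filtering
ipfs swarm peers | wc -l

# test raw TCP reachability to a known peer on the default port
# (HOST is a placeholder -- pick a host from `ipfs bootstrap list`)
nc -vz -w 5 HOST 4001
```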
On Wed, Aug 30, 2017 at 12:29 PM, Matt Price notifications@github.com wrote:
The "Replicate a dataset" tutorial presents an essential part of the DT platform -- a mechanism that allows an individual or entity to assume direct responsibility fr the health of a dataset or collection.
This is conceptually important, and without it we can't give a complete account of the DT vision. However, the current implementation is difficult to work with, for at least the following reasons:
- it requires command-line knowledge, something especially rare among Windows users
- the IPFS install (again, especially on Windows) can be finicky, so this is not a good first encounter with the command line. While an introduction to the command line can be powerful (cf. Software Carpentry), we are not setting up beginners for success here, and their experience may actually lead them to AVOID future contact with the CLI
- guest networks often lock down the IPFS default ports, so the demo may not even work for most people! (a possible workaround is sketched after this list)
- in future versions of DT, the CLI will not be necessary, as @b5 is building an electron app that will run the IPFS daemon in the background
- most end users probably don't care about IPFS per se, even if they're interested in learning about distributed data curation and want to contribute somehow
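On the ports point above: one partial workaround is to move the node's swarm listener off tcp/4001. A sketch only -- it helps when the local network filters that port for your own traffic, but it can't help you reach remote peers that only listen on 4001:

```sh
# move the swarm listener from the default tcp/4001 to tcp/4002
ipfs config --json Addresses.Swarm \
  '["/ip4/0.0.0.0/tcp/4002", "/ip6/::/tcp/4002"]'

# restart the daemon so the new addresses take effect
ipfs daemon
```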
Proposal: let's keep this tutorial around but only break it out when we're talking to people who are directly concerned with computing infrastructure. This means people like sysadmins, data managers, and maybe digital project archivists & librarians. This audience can really benefit from a more technical introduction.
Meanwhile, for other audiences, let's craft a new tutorial as soon as the app-internal IPFS node is implemented. We can walk through similar tasks, invite participants to start contributing to the distributed web via DT, and point the enthusiastic to the command-line version for an in-depth look.
Good idea @ebarry. See #26, but leaving this one open for now unless you think it should be closed.
The "Replicate a dataset" tutorial presents an essential part of the DT platform -- a mechanism that allows an individual or entity to assume direct responsibility fr the health of a dataset or collection.
This is a conceptually important and without it we can't give a complete account of the DT vision. However, the current implementation is difficult to work with, for at least the following reasons:
Proposal: let's keep this tutorial around but only break it out when we're talking to people who are directly concerned with computing infrastructure. This means people like sysadmins, data managers, and maybe digital project archivists & librarians. This audience can really benefit from a more technical introduction.
Meanwhile, for other audiences, let's craft a new tutorial as soon as the app-internal ipfs node is implemented. We can walk through similar tasks and invite participants to start participants to start contributing to the distributed web via DT, and point the enthusiastic to the command-line version for an in-depth look.