Open · juliangruber opened 7 years ago
Who are the users?

Anyone maintaining a GitHub repo that has more than 3 MB of data in it.
How would the user interact with it?

Instead of checking data into git, or using git LFS, you use dat to push and fetch it.

The workflow for fetching data could be:
$ git clone ... && cd ...
$ dat fetch
or, with a package.json and a "preinstall" script in place, it would be:

$ git clone ... && cd ...
$ npm install
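For example, the preinstall hook could just invoke dat. A minimal sketch, assuming the proposed dat fetch command exists and the dat CLI is installed:

$ cat package.json
{
  "scripts": {
    "preinstall": "dat fetch"
  }
}

npm runs the preinstall script before installing dependencies, so the data would already be in place by the time the install finishes.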
To add a dat archive to your repo, do something like
$ echo "DATKEY data/" >> .datmodules
$ dat fetch
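Under this proposal, .datmodules would be a line-oriented file mapping a dat key to a checkout path, much like .gitmodules maps submodule URLs to paths. A minimal sketch of what dat fetch could do internally, assuming one "<key> <path>" pair per line and reusing the existing dat clone <link> [dir] command:

# hypothetical fetch loop: clone each listed archive into its path
while read -r key path; do
  dat clone "$key" "$path"
done < .datmodules

Everything here except dat clone itself is an assumption about the proposed file format; handling updates to an already-fetched archive is left out of this sketch.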
Why is the project important to the ecosystem?

Git sucks for managing large files, yet GitHub is a great place to host your project.
What defines the minimum requirements to sufficiently release (version 1)?

The commands and file format as described above.
What are some stretch goals or interesting features for further releases (version 2, 3)?

Can't think of anything right now.