dat-ecosystem-archive / datproject-discussions

a repo for discussions and other non-code organizing stuff [ DEPRECATED - More info on active projects and modules at https://dat-ecosystem.org/ ]

Live community hangout #5 - Sept 21st, 10AM PST #26

Closed okdistribute closed 8 years ago

okdistribute commented 8 years ago

We will have our fifth community call next Monday using Hangouts on Air! The entire community is encouraged to join and listen in, ask questions, and see where we're going next with dat.

Youtube Link: http://youtu.be/rj71iaugcXA Gitter link (chat/ask questions): https://gitter.im/datproject/discussions

Theme: On API simplification and containerization
Where: Google Hangouts on Air
Date: Monday, September 21st
Time: 1pm EST (17:00 GMT)

Leave your questions in the thread below, in our Gitter web chatroom, or in #dat on freenode.

Agenda:

We'll be using Hangouts On Air.

Note that only Dat team members and others on the agenda will be invited to join the Hangout so they can broadcast voice and video. Hangouts only supports about 10 people max, but the YouTube stream can support an unlimited number of viewers.

max-mapper commented 8 years ago

some questions related to https://github.com/maxogden/dat/pull/403 and CLI API issue above

other random notes for call (disregard unless you're me, doesn't make much sense)


- third floor: https://github.com/mafintosh/hyperfs (filesystems on top of graphs)
- second floor: https://github.com/mafintosh/hyperlog (graphs on top of logs)
- first floor: https://github.com/mafintosh/level-logs (append-only logs)
- ground floor: node.js + leveldb
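The stack above can be sketched as a toy model. This is a hypothetical, in-memory illustration of the layering idea only; the function names and APIs here are invented for the sketch and are not the real level-logs / hyperlog / hyperfs modules:

```javascript
// Toy model of the four "floors": each layer only uses the one below it.
// Hypothetical APIs for illustration; not the real modules.

// ground floor: a plain key/value store standing in for leveldb
const store = new Map();

// first floor: an append-only log on top of the key/value store
function appendLog(name, value) {
  const len = store.get(`${name}!len`) || 0;
  store.set(`${name}!${len}`, value);
  store.set(`${name}!len`, len + 1);
}
function readLog(name) {
  const len = store.get(`${name}!len`) || 0;
  const entries = [];
  for (let i = 0; i < len; i++) entries.push(store.get(`${name}!${i}`));
  return entries;
}

// second floor: a graph node is a log entry linking to earlier entries
function addNode(links, value) {
  appendLog('graph', { links, value });
  return readLog('graph').length - 1; // node id = position in the log
}

// third floor: a "file" is the newest graph node carrying that name
function putFile(name, data, prev) {
  return addNode(prev == null ? [] : [prev], { name, data });
}
function getFile(name) {
  const nodes = readLog('graph');
  for (let i = nodes.length - 1; i >= 0; i--) {
    if (nodes[i].value.name === name) return nodes[i].value.data;
  }
}
```

Editing a file in this model appends a new node that links back to the previous version, which is the append-only behavior the notes describe.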

in hypercore: edit file -> mac os -> hyperfs -> new edit to existing file
  e.g.: 'echo "foo" >> bar' -> stores only a diff on top of the existing bar
in mac os: edit file in dat -> replaces existing file
  e.g.: 'dat put foo.jpg' -> completely replaces the existing foo.jpg

dat clone
ls

image.zip                           1GB

{
  "image.zip": [
    "2039j23j402",
    "j2034j2093jj",
    "23j420394j20"
  ]
}

dat add image.zip
-> 1mb "2394u02893j" >> image.zip chunk list

dat sync
^ 1mb

=======

- download all metadata in the graph
- download the set of files at a given version (only the latest copy of each file) into the user's working directory, with correct names, ready to use (no previous file version data)
- download all old versions into the .dat folder
- just update the graph to see if new metadata is available; don't touch the working directory

dat clone foobar.com
dat checkout
dat sync
dat fetch

notes from call

dat init
dat commit "hi"
dat push foo.com

- client pushes everything
- server gets everything
- client only stores metadata afterwards

dat init
dat snapshot "hi" # commits + backs up files into .dat
dat push foo.com

- client pushes everything
- server gets everything
- client has everything
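The difference between the two flows is only what the client retains after pushing. A toy model of that bookkeeping (purely illustrative, not how dat is implemented):

```javascript
// After `dat commit` + push, the client keeps metadata only; after
// `dat snapshot` + push, the client keeps the file data too. The server
// ends up with everything either way. Hypothetical model for illustration.
function afterPush(mode, files, metadata) {
  const server = { files: { ...files }, metadata: { ...metadata } };
  const client = {
    files: mode === 'snapshot' ? { ...files } : {}, // commit: drop local file data
    metadata: { ...metadata },
  };
  return { client, server };
}
```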

okdistribute commented 8 years ago

I'd love to start with: what relevant constraints are imposed, and what new features are unlocked, when we integrate hyperfs with dat? If there's a more elegant way to think about this, especially if we can get deduplication to work within the bytes of blobs, I'm all ears.

max-mapper commented 8 years ago

@cshum oh sorry I just realized we scheduled it for 1AM your time again. maybe the next one we can do 8AM your time/5PM our time

sallespro commented 8 years ago

Hi, I caught a bit of the conversation today, but missed the hyperfs / containers big picture. To let code join the data while keeping file archives as primitives, what does it look like to play with dat containers?

I ended up using https://github.com/zchee/docker-machine-hypercore, but didn't really understand whether that's the direction we're taking after all.

okdistribute commented 8 years ago

@sallespro we mostly punted on coming up with a solid command-line interface for dat containers for now. Towards the end we started getting into the weeds, if you want to take a gander. :) What's your use case with docker-machine-hypercore?