I'd enjoy talking about this in #2, as well.
Definitely, let's get Ryan's thoughts on the matter.
er, @snarfed's thoughts
nice! thanks for drawing up the diagram, seems like it's coming together well.
at the straw man architecture stage, often a useful exercise is to come up with a small set of core use cases, e.g. "alice downloads her tweets to her hard drive," "bob copies his instagram pictures to flickr," "eve sets it up to automatically repost her facebook status messages to tumblr," and walk those all the way through the architecture, step by step, without hand waving anything. you often identify missing parts you need and existing parts that may be unnecessary. SWAT0 is a great example of this: http://www.w3.org/2005/Incubator/federatedsocialweb/wiki/SWAT0
we might also consider ordering those use cases by which you want to tackle first. that can let you break up the components into phase 1, 2, 3, and see which you can consolidate at the beginning and break apart later, if/when necessary. this can help us define a reasonable MVP we can build in weeks or months.
on the technical side, we'll obviously soon want to think about which parts are client vs server side, what the stack(s) look like, where we want to host and run the MVP, etc., but those can probably wait a bit.
lastly, feel free to ignore this if it's a rathole, but: what's the arbiter?
also, fb, twitter, and g+ all let you download full archives (zips) of all of your data, and they each package and render them in their own file and directory structures. iirc fb and g+ include simplified HTML versions. those could be really useful for prototyping and experimenting with different use cases and UX flows, both for local hard drive download and different web UXes. definitely worth a thought, since it's an easy way to try out ideas quickly without writing much or any code.
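to make that concrete, here's a minimal throwaway script for inventorying one of those archive zips before committing to any UX flow. this is just a sketch: the filename is hypothetical, and the actual directory structures vary by service and export date, so treat the output as exploratory.

```python
import zipfile
from collections import Counter
from pathlib import PurePosixPath

# Hypothetical filename; point this at whatever archive zip you downloaded.
ARCHIVE = "twitter-archive.zip"

with zipfile.ZipFile(ARCHIVE) as zf:
    names = zf.namelist()

# Count files by extension and by top-level directory to get a quick
# picture of how the service packages its export.
by_ext = Counter(PurePosixPath(n).suffix or "(none)"
                 for n in names if not n.endswith("/"))
by_dir = Counter(n.split("/", 1)[0] for n in names)

print(f"{len(names)} entries")
for ext, count in by_ext.most_common(10):
    print(f"  {count:5d} {ext}")
for d, count in by_dir.most_common():
    print(f"  {count:5d} {d}/")
```

a few minutes with something like this against each service's export should tell us which simplified HTML versions exist and what we can reuse directly for prototyping.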
I love the idea of imagining various user requirements within that architecture diagram. It's definitely very hand-wavy right now. @markmhx is doing an initial draft of a document for this repo that explains each of these elements; those use case flows might be a good place to capture that. I will work on a few once that doc is in a PR.
The Arbiter is just a name we assigned to the thing that schedules periodic user syncs and lets us administer the syncing system. Instead of the conductor knowing about business concerns like daily syncs, it just exposes an API that allows you to trigger syncs for users by ID. The arbiter is supposed to do the work of determining when and for whom syncs should run. I figured there'd be some manual intervention here at times, and for the sake of testability, keeping the conductor siloed may be a good thing.
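here's a minimal sketch of that split, assuming the conductor exposes a hypothetical endpoint like `POST /syncs/<user_id>` (the URL, interval, and user lookup are all placeholders, not anything we've built yet):

```python
import time
import requests

CONDUCTOR_URL = "http://localhost:8080"  # assumed conductor address
SYNC_INTERVAL = 24 * 60 * 60             # e.g. daily syncs

def due_user_ids():
    """Return IDs of users whose last sync is older than SYNC_INTERVAL.
    A real arbiter would query the users datastore; hardcoded here."""
    return [1, 2, 3]

while True:
    for user_id in due_user_ids():
        # The arbiter owns the scheduling policy; the conductor knows
        # nothing about daily schedules, it just runs a sync on request.
        requests.post(f"{CONDUCTOR_URL}/syncs/{user_id}")
    time.sleep(60)  # re-check the schedule every minute
```

the nice property of this split is that the conductor can be tested in isolation by firing single sync requests at it, while scheduling logic (and any manual intervention) stays entirely in the arbiter.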
I'm posting a screenshot of the Gliffy diagram above in case the trial runs out and we lose it as a result.
Also, Jack is now working on further developing this infrastructural schema and writing up notes about it.
Time: 3pm CEST
Participants: Mark Hendrickson, Jack Pearkes
Communication: Google Hangout (https://plus.google.com/hangouts/_/f6d082231f849e25fb32accc97b75ed8dbfd903e)
We spent the time organizing our thoughts around the various services that comprise Asheville, across the three main pillars of sync, platform, and apps. We used Gliffy to draft a visual diagram that demonstrates A) what core services are necessary for the system as a whole, and B) which ones communicate with each other:
http://www.gliffy.com/go/publish/4855162
We intend to publish notes in the repo that describe just how these services work together as a whole.
This exercise seemed necessary to get a bird's-eye view of the project before diving into the design of any particular part, which will almost certainly start with the syncing-related services. The diagram is also still very much open to feedback and modification.