Open crspybits opened 7 years ago
My concern with this is security. Here's the security hole I envision:

1. A jailbroken client is able to get the access IDs being used for client-side access to Google and Facebook. This is pretty much all that's needed (currently, at least) to rebuild something like the SharedImages iOS app and have it access the right backend. (Of course, you need the backend URL, but that's not really secret.)
2. Since it's open source, the SharedImages app can be modified to spill out auth information coming back from the server.
3. The rebuilt SharedImages app gets sent auth information for someone else's cloud storage (e.g., Dany's client gets sent Chris's Google Drive auth). This auth info gives the client full access to that cloud storage.
4. Now the hacker Dany has full access to Chris's Google Drive.
I'd like to explore a POC to understand the dynamics of the URLs. I am supposing that some URLs allow for the addition of files, and other URLs allow only viewing the file. What I believe would be distributed to other clients would be the "view only" URL (via the JSON db). As for uploading the file in the first place, could there be some cooperation with the built-in client (gdrive, dropbox, etc.), so that the uploading is done by that client and then a distributable URL would be built to share (controlled by a lambda function)? Let me draw it out (can we paste pictures in this thread?).
I'm happy to talk about alternative architectures. Especially ones that remove the centralized nature of the server. It's a bottleneck. By "URLs" above, I believe you mean server endpoints. The endpoints are here: https://github.com/crspybits/SyncServer-Shared/blob/master/Sources/SyncServerShared/ServerConstants.swift#L78
The other main reason I decided to go for a server solution (aside from security) is synchronization. If, say, two people are accessing the same files in cloud storage, how do you synchronize access in some meaningful manner? What I'm using is an optimistic synchronization strategy. It goes like this:
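The details of the strategy don't appear in the thread, so as a hedged illustration only (the class and names below are my invention, not SyncServer's actual implementation), a typical optimistic scheme versions each file and rejects any commit that was based on a stale version:

```python
# Hypothetical sketch of optimistic synchronization: each file carries a
# version number; a writer's update is accepted only if it was based on
# the latest version; otherwise the writer must re-fetch and retry.

class ConflictError(Exception):
    pass

class FileIndex:
    def __init__(self):
        self._versions = {}  # file_id -> current version number

    def current_version(self, file_id):
        return self._versions.get(file_id, 0)

    def commit(self, file_id, based_on_version):
        # Reject the write if someone else committed in the meantime.
        if based_on_version != self.current_version(file_id):
            raise ConflictError(f"{file_id}: stale version {based_on_version}")
        self._versions[file_id] = based_on_version + 1
        return self._versions[file_id]

index = FileIndex()
v = index.commit("image1.jpg", 0)    # first upload -> version 1
try:
    index.commit("image1.jpg", 0)    # concurrent writer with a stale version
except ConflictError:
    # The losing writer re-reads the current state and retries.
    v = index.commit("image1.jpg", index.current_version("image1.jpg"))
```

No locks are held while a client works; conflicts are detected only at commit time, which is what makes the approach "optimistic".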
So, maybe my use case is limited to the constraints of image sharing. The locks are less likely to be necessary based on these assumptions:
- Images are owned(1) by a single user, or better yet, by a single cloud storage account. A user may own more than one account (Dropbox, gdrive, OneDrive, etc.).
- Only the owner can upload an image. Does an image really change? It can be replaced.
- The only operations needing support are delete/insert.
- Once an image is inserted, a URI is made for it. This URI can be assigned a GUID (shorter) locally by the user.
- A database (db1)(2) can be generated by an album(3) owner. This db holds all required image information (the mapping of URI and GUID).
- GUIDs could be useful for caching (SharedImages probably already has a strategy for this, so I'll stop there).
- The db file can only be written by the account owner to the account owner's account. Therefore, a version of the db is created by each user in their own account.
- The master db is owned and maintained by the album owner (yes, 1 album = 1 master db).
- All other copies of the db for the same album are slave dbs.
- The slave dbs hold any updates of entries (such as inserts and metadata updates).
- Any one client must read all db copies in order to construct the current version of the db.
- In theory, the number of rows in slave dbs should be tiny relative to the master db.
- When the owner of a master reconstructs the album db, all pending updates from slave dbs are committed.
- When non-owners reconstruct the album db, any committed changes are removed from their slave dbs (remember, only the owner of a slave db can edit that particular db).
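The reconstruction rule above (read the master plus all slave dbs, then apply pending slave updates on top) can be sketched roughly like this; the record shapes and names are my assumptions, not a spec from the thread:

```python
# Hypothetical sketch of album-db reconstruction: the master db maps
# image GUIDs to read-only URIs; each slave db holds one user's pending
# updates (inserts/deletes). Any client merges master + all slaves to
# obtain the current view of the album.

def reconstruct_album(master, slaves):
    """master: dict of guid -> uri; slaves: list of update-record lists."""
    view = dict(master)  # start from the committed master state
    for slave in slaves:
        for op in slave:
            if op["op"] == "insert":
                view[op["guid"]] = op["uri"]
            elif op["op"] == "delete":
                view.pop(op["guid"], None)
    return view

master = {"g1": "https://example.invalid/img1",
          "g2": "https://example.invalid/img2"}
slaves = [
    [{"op": "insert", "guid": "g3", "uri": "https://example.invalid/img3"}],
    [{"op": "delete", "guid": "g2"}],
]
view = reconstruct_album(master, slaves)  # g1 and g3 remain; g2 is gone
```

When the album owner runs this, the resulting view becomes the new master and the slave rows can be cleared; non-owners only clear rows already reflected in the master.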
Further assumptions include the probability that albums would not grow to 1000 images or more; what would be the ultimate purpose of such an album? My assumption is that albums should be small (<1000) sets of pictures to be shared by a group of individuals with some sort of common theme (e.g., "check out my April cats", or "check out my brown cats").
Some roadmap considerations: (1) an image owner may wish to transfer ownership; this is where a lambda function would assist in automating the task. (2) A JSON document that can be manipulated by CouchDB, etc. (3) Albums would be initiated by an individual, and invitations through email or other means would find their way to the authorized users. The authorized users would track these invitations in a private folder in their cloud storage. They would exist as album entities once an invitation is redeemed. Redeeming an invitation would be allowed only once per token.
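The "only once per token" invitation rule could be sketched as follows; this is an illustrative assumption about the mechanism, not a design from the thread:

```python
# Hypothetical sketch of one-time invitation tokens: each invitation is
# a random token recorded by the album owner; redeeming it marks it as
# used, so a second redemption of the same token fails.
import secrets

class Invitations:
    def __init__(self):
        self._pending = set()

    def create(self):
        token = secrets.token_urlsafe(16)  # unguessable invitation token
        self._pending.add(token)
        return token

    def redeem(self, token):
        if token not in self._pending:
            return False          # unknown or already redeemed
        self._pending.discard(token)
        return True

inv = Invitations()
t = inv.create()
first = inv.redeem(t)    # succeeds
second = inv.redeem(t)   # fails: each token redeems only once
```

In the distributed design, the pending set would live in the album owner's private cloud folder rather than in server memory.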
As I have time, I can draw a diagram with all the components if I should pursue this further. Where I am going with this is to use low-level db operations that already exist for distributed databases.
We may still need locking mechanisms for a user using their client on multiple machines, but I am hoping to bank on the premise that the probability that a single user would use their multiple clients concurrently would be low.
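For the multiple-machines case, one possible shape for such a locking mechanism (purely a sketch under my own assumptions) is a coarse per-user lock with an expiry, so a crashed device can't hold the lock forever:

```python
# Hypothetical sketch of a per-user lock with a TTL, guarding against
# the (assumed rare) case of one user syncing from two devices at once.
# A stale lock left by a crashed device simply times out.
import time

class UserLock:
    def __init__(self, ttl_seconds=60):
        self.ttl = ttl_seconds
        self._holder = None
        self._expires = 0.0

    def acquire(self, device_id, now=None):
        now = time.monotonic() if now is None else now
        # Grant the lock if free, expired, or re-acquired by the holder.
        if self._holder is None or now >= self._expires or self._holder == device_id:
            self._holder = device_id
            self._expires = now + self.ttl
            return True
        return False

lock = UserLock(ttl_seconds=60)
got_a = lock.acquire("iphone", now=0.0)    # granted
got_b = lock.acquire("ipad", now=10.0)     # refused: still held
got_c = lock.acquire("ipad", now=100.0)    # granted: previous lock expired
```

In a serverless version, the lock record could be a small file in the user's own cloud storage, with the same TTL rule applied on read.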
Might be good to have a spoken conversation about this. There's lots there! What about the situation we've been talking about with discussion threads for images? I think what you are talking about is static, single versioned files. With discussion threads, a file containing the discussion needs to be changed by multiple users.
Consider this project, but replacing the S3 object with cloud storage file(s).
https://github.com/cloudnative/lambda-chat
The chat is not really live, but more reactions to comments. With or without nesting, I don't know.
Here is an example of the type of URL that I would store in the "shared folder" JSON parameter:
I do not know how long this URL would be valid for, so this is a test. I think with a security token, this URL could be refreshed as needed.
How about storing the access token in the server and only after authentication would the server release the auth token to the client. The client would hold the token in memory and never store it.
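Holding the token in memory only, never persisting it, might look like this on the client side; this is a generic sketch (the class and names are mine), not SharedImages code:

```python
# Hypothetical sketch of an in-memory-only token holder: the token lives
# in a plain attribute, is never written to disk, and is cleared
# explicitly when the session ends.

class InMemoryToken:
    def __init__(self):
        self._token = None

    def receive(self, token):
        self._token = token   # delivered by the server after authentication

    def use(self):
        if self._token is None:
            raise RuntimeError("no token; re-authenticate")
        return self._token

    def clear(self):
        self._token = None    # e.g., on logout or app teardown

holder = InMemoryToken()
holder.receive("ya29.example-token")
token = holder.use()
holder.clear()                # after this, use() raises until re-auth
```

Note this only narrows the window of exposure: on a jailbroken device, memory can still be inspected while the token is held.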
In general, I think anything coming back to the client can be accessed by a hacker. Once the auth token is on the client (having gotten through HTTPS/SSL), it's on the client, and I think the client can really do what it wants, given alterations of code (made all the easier because this project is open source) or poking around with tools at runtime on jailbroken devices. In general, as we well know, clients can't be trusted.
Apple has some interesting techniques for doing In-App Purchases, and making them secure. These involve encrypted objects that are on the client. However, these encrypted objects are passed to Apple servers. And the Apple server can decrypt them and ensure the client is not misbehaving. In the case of our cloud storage, if we are trying to have communication go directly from the client to someone else's cloud storage, we would have to send a plain-text auth token to the client, and then have the client use that plain-text auth token to communicate with cloud storage. What's missing here is a means to have the cloud storage receive some kind of encrypted object containing the auth token (the encrypted object could be signed by our server). Maybe we ought to lobby the cloud storage vendors for this.
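The "encrypted object signed by our server" idea could be approximated with a signed envelope; a minimal sketch, assuming a key shared between our server and the storage provider (this signs but does not encrypt, and no provider supports this today, which is the point of the lobbying suggestion):

```python
# Hypothetical sketch of a server-signed token envelope: the server
# wraps the auth token with an HMAC signature; the client relays the
# envelope to storage but cannot forge or alter it without detection.
import base64
import hashlib
import hmac
import json

SERVER_KEY = b"shared-secret-between-server-and-provider"  # assumption

def seal(auth_token, user_id):
    payload = json.dumps({"token": auth_token, "user": user_id}).encode()
    sig = hmac.new(SERVER_KEY, payload, hashlib.sha256).digest()
    return base64.b64encode(payload).decode(), base64.b64encode(sig).decode()

def verify(payload_b64, sig_b64):
    payload = base64.b64decode(payload_b64)
    expected = hmac.new(SERVER_KEY, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, base64.b64decode(sig_b64)):
        return None   # tampered or forged envelope
    return json.loads(payload)

payload, sig = seal("gdrive-token-123", "chris")
claims = verify(payload, sig)        # envelope verifies; claims recovered
tampered = verify(payload, base64.b64encode(b"x" * 32).decode())  # rejected
```

A real scheme would also encrypt the payload (so the client never sees the plain-text token) and include an expiry; this sketch only shows the integrity half.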
Yes, I have been approaching this incorrectly. Authorization needs to be negotiated with each cloud storage individually, holding the access token in memory only (not reusing it). Basically, mimic what Gmail or your bank app does. The authorization is built into the client OS. (The appropriate account already exists, just like you currently have the user log in to SharedImages.)
And there's one more step here-- just so we're on the same page. In the SharedImages app, you don't have direct access to my Google Drive, which is what the SyncServer gives you. SyncServer is storing my auth tokens for Google Drive and allowing you access to that. From your iPad, you cannot sign in directly to my Google Drive.
Yes, understood. We are exploring in #11 a more distributed way of storing the images, such that each contributor to a "folder" or "album" stores their images on their own cloud storage. What is shared, in a managed fashion, are the read-only links. A client needs authorization to put images on that client's own cloud storage(s), along with updating (re-uploading) metadata, which would include a shareable link to all shared pictures for that "album".
I still see a role at the SyncServer level that manages invitations and hands out metadata URLs.
I read this about google access tokens recently (not in the manual, so needs verification):
Access tokens typically expire after 60 minutes. If you have a refresh token you can use the refresh token to get a new (valid) access token. This doc explains how to do that: https://developers.google.com/accounts/docs/OAuth2WebServer#refresh
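The refresh flow that doc describes is a POST of the refresh token to Google's token endpoint; a sketch, with placeholder credentials (the endpoint below is Google's current token URL, which may differ from the older one in the linked doc):

```python
# Sketch of the OAuth2 refresh-token exchange: POST the refresh token
# (plus client credentials) to the token endpoint; the response JSON
# carries a fresh access token, typically valid for about an hour.
import json
import urllib.parse
import urllib.request

TOKEN_ENDPOINT = "https://oauth2.googleapis.com/token"

def build_refresh_request(client_id, client_secret, refresh_token):
    body = urllib.parse.urlencode({
        "client_id": client_id,
        "client_secret": client_secret,
        "refresh_token": refresh_token,
        "grant_type": "refresh_token",
    }).encode()
    return urllib.request.Request(TOKEN_ENDPOINT, data=body, method="POST")

def refresh_access_token(client_id, client_secret, refresh_token):
    req = build_refresh_request(client_id, client_secret, refresh_token)
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["access_token"]

# Placeholders only; a real call needs registered app credentials.
req = build_refresh_request("CLIENT_ID", "CLIENT_SECRET", "REFRESH_TOKEN")
```

The refresh token itself is the long-lived secret here, so it needs the same protection we've been discussing for access tokens.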
Now, what we need is a restricted privilege on this access token, e.g., append-only access to a certain file. This may be the feature to request from the cloud storage providers.
Have you considered the upload going directly from client to shared storage (like Google Drive or Dropbox)? Your invitation would grant you access to an auth file released by a lambda function. With this auth file, the inventory of files would be revealed to the client. The auth file would be stored in the creator's file space (gdrive, Box, etc.); of course, you'd be limited to cloud storage that supported this.