thanks for writing this up!
The current implementation of IPFS can only distribute static content. In order for IPFS to build a fully "distributed web", with websites similar to the ones we use today, it will need to handle distributing dynamic content, including private user-specific data like passwords or account information.
This is not true. IPFS works just fine for dynamic content. IPNS is there for that reason. Please look into it.
encrypted objects ...
yes, this is already planned, much like you suggest. see https://gateway.ipfs.io/ipfs/QmXAQdSkFbJHCPHDaALSxeqtT2fFFV3U1FxeSvDmXVFgUD/ipfs-draft3.pdf
nice diagrams! :) :+1:
See also IPNS faq. https://github.com/ipfs/faq/issues/16
Thanks for the response! I've been told by others that dynamic websites are solved by IPNS, but I still don't understand how a mutable namespace provides a way to safely share private data. I'll ask for clarification on that in the FAQ: https://github.com/ipfs/faq/issues/16.
I suppose the issue here is one of application architecture and not of the IPFS protocol itself. I might try to build a "vanilla" (meaning no Ethereum or the like) dynamic web app using IPNS to see how that would work. I think it would be a great proof of concept for people like me who can't seem to conceptualize it. Will be sure to post a follow-up here if I end up with anything useful.
please be warned IPNS is still being worked on and may not have perfect resolution.
i didn't want to start a new issue in case this was related, so i hope i can ask here.
the way i'm understanding ipfs is that if someone deploys an application, e.g. a web application in javascript, when others access it they basically make a copy of the application on their own side and then run it?
it's just a huge filesystem, so i can't, for example, just put my current php-based website onto ipfs and expect it to work, because it requires a php server. and since it includes account pages and authorization, that definitely won't work (unless it used an encrypted file-based database which could be shared?).
but it seems like everything will be possible, and things are in the works to make these cases easier?
right now, for an inexperienced user like me, all i'm probably capable of is hosting static websites or javascript web apps (etc.) on ipfs, right?
@berrythesoftwarecodeprogrammar sounds about right!
To clarify, and see if I understand: IPFS supports the distribution of dynamic content, in that an authority can generate or change the content associated with a name, and IPFS/IPNS will distribute that new content to those who request it. But IPFS does not provide a means for generating dynamic content. That's up to the application using IPFS. It also doesn't provide a direct means for feedback in the way that HTTP requests do. Is that right?
That is, while IPFS could be used to distribute a Twitter-like timeline, allowing the consumer of that timeline to favorite (or like, or whatever the kids are doing these days) a status in that timeline would not be so simple. The first thing that comes to mind is that the user clicking the button would sign a fact stating that they have favorited the status, and then make that fact available over IPFS. However, that leaves a signaling problem: other users of the service don't know to look for that information on the network.
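Concretely, here's a rough sketch of what I mean, assuming the js-ipfs API (`ipfs.add`) and Node's built-in Ed25519 signing. The fact schema is something I've made up for illustration:

```typescript
// Hypothetical sketch: a client signs a "favorite" fact and publishes it to IPFS.
// Assumes the js-ipfs API and Node's crypto module; the fact shape is invented.
import { create } from 'ipfs-core'
import { generateKeyPairSync, sign } from 'crypto'

async function publishFavorite(statusCid: string) {
  const ipfs = await create()
  const { publicKey, privateKey } = generateKeyPairSync('ed25519')

  const fact = Buffer.from(JSON.stringify({
    type: 'favorite',
    status: statusCid,           // the favorited status, by content hash
    at: new Date().toISOString()
  }))

  // Sign the fact so other peers can verify who favorited the status.
  const signature = sign(null, fact, privateKey).toString('base64')

  // Add the signed fact to IPFS; anyone who knows the hash can fetch it,
  // but nothing tells other users the hash exists -- the signaling problem.
  const { cid } = await ipfs.add(JSON.stringify({
    fact: fact.toString('utf8'),
    signature,
    publicKey: publicKey.export({ type: 'spki', format: 'pem' })
  }))
  return cid.toString()
}
```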
Is there a solution in mind for the signaling problem in that setup? Or is there a better solution in mind for the entire use case? Or is this still an open question?
It also doesn't provide a direct means for feedback in the way that HTTP requests do. Is that right?
@Peeja can you explain what you mean there? what is HTTP doing here that is different from just moving documents?
The first thing that comes to mind is that the user clicking the button would sign a fact stating that they have favorited the status, and then make that fact available over IPFS. However, that leaves a signaling problem: other users of the service don't know to look for that information on the network.
Is there a solution in mind for the signaling problem in that setup? Or is there a better solution in mind for the entire use case? Or is this still an open question?
Yeah, we'll be handling this with pub/sub, with CRDTs, and other aggregation strategies.
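for a rough idea of how pubsub could carry that signal (this assumes the js-ipfs pubsub API; the topic naming convention is arbitrary and made up here):

```typescript
// Sketch: signaling new facts over pubsub so peers know which hashes to fetch.
// Assumes the js-ipfs pubsub API; the topic name is an application choice.
import { create } from 'ipfs-core'

async function main() {
  const ipfs = await create()
  const topic = 'myapp/favorites'   // arbitrary, application-chosen topic

  // Peers interested in favorites subscribe and fetch whatever hash is announced.
  await ipfs.pubsub.subscribe(topic, async (msg) => {
    const cid = new TextDecoder().decode(msg.data)
    const chunks: Uint8Array[] = []
    for await (const chunk of ipfs.cat(cid)) chunks.push(chunk)
    console.log('new fact:', Buffer.concat(chunks).toString('utf8'))
  })

  // The favoriting client announces the hash of the signed fact it just added.
  // ('Qm...' is a placeholder for a real content hash.)
  await ipfs.pubsub.publish(topic, new TextEncoder().encode('Qm...'))
}

main()
```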
can you explain what you mean there? what is HTTP doing here that is different from just moving documents?
In HTTP, there is a request; in IPFS, there isn't. HTTP is a "pull" technology; IPFS is…well, it's not pulling and it's not pushing. It's…rendezvousing?
In HTTP, we often have to create an entity for a particular user. The entity is specific to that user—a timeline of statuses of other users they follow, for instance—and we know which user to build the entity for because of (usually) a cookie header in the request. The user-specific entity is created on demand.
In the IPFS model (naively, at least), the application would have to generate these entities in advance and make them available to the network, presumably encrypted with the public keys of the users for which they're intended.
HTTP offers an API of function calls to its clients. IPFS offers a data structure. Fetching data from the IPFS network can't trigger the process that will create that data; the data must already exist.
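Here's a rough sketch of that naive model, assuming js-ipfs and Node's RSA primitives. The data shapes are invented, and a real application would need hybrid encryption, since RSA alone limits payload size:

```typescript
// Sketch of the "generate in advance" model: the application builds each user's
// timeline ahead of time, encrypts it to their public key, and adds it to IPFS.
import { create } from 'ipfs-core'
import { publicEncrypt } from 'crypto'

interface User { id: string; publicKeyPem: string }

async function publishTimelines(users: User[], buildTimeline: (u: User) => string) {
  const ipfs = await create()
  const published: Record<string, string> = {}

  for (const user of users) {
    // Build the user-specific entity up front -- there is no request to respond to.
    const timeline = Buffer.from(buildTimeline(user))

    // Encrypt it so any peer can host and distribute it without reading it.
    // (RSA alone caps the payload size; a real app would encrypt a symmetric
    // key with RSA and the payload with that key.)
    const encrypted = publicEncrypt(user.publicKeyPem, timeline)

    const { cid } = await ipfs.add(encrypted)
    published[user.id] = cid.toString()
  }
  return published   // some out-of-band index still has to map users to hashes
}
```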
You could think of HTTP headers as metadata of the documents being passed. IPFS has the same or similar metadata. I imagine cookies could be implemented using a user's public key, IP address, or other identifiers, or implemented at the application level (unsure if that's possible given the current IPFS API).
I think a lot of these questions, including ones I've asked, are about how an application can be self-sufficient and operate correctly in a decentralized/distributed web. This isn't necessarily an IPFS-specific issue. I think application architecture will have to grow out of the distributed nature of IPFS, much like how the modern web was designed around HTTP. There is also a feedback loop where features/enhancements are added to the protocol to accommodate the needs of applications, which happened with HTTP and will happen with IPFS. For now I think it's in the hands of developers to design applications that take advantage of this distribution protocol, and see whether there is a real need for a protocol update, or whether there is a way to design the application to fit the protocol.

Regarding app development, tools like blockchains (http://tendermint.com/), smart contracts (http://www.erights.org/, https://erisindustries.com/), distributed filesystems (https://www.tahoe-lafs.org/trac/tahoe-lafs), and others can be used to build dynamic distributed apps. I have no clue how to do this personally, but it seems to be the proper way to build a distributed dynamic app for IPFS.
Let's take a different approach, then. Let's use Google.
When I submit a request for a Google search for "bananas", I get back a page of results. Assuming I send no identifying information and the indexer is paused, I can send the same request several times and get the same response, and anyone else can do the same. The response is cacheable. It can be distributed over IPFS and given a name for IPNS which corresponds to the query, "bananas".
Thus, anyone can now look up the name that goes with the query "bananas" and fetch the results over IPFS.
But can they also search for "croutons" this way? Or "pumpkin pie"? If so, Google must have conducted every search possible in advance and published the results. That's not feasible.
In HTTP, even requests which are idempotent and return unchanging entities may still entail work. Those applications don't translate to IPFS (or what I understand IPFS to currently include) without some kind of additional signaling system to tell the application to do that work and publish the results.
In the Twitter example, the solution is probably to move the responsibility for aggregating several users' timelines to the client. In the Google example, the equivalent would amount to the client fetching the entire Web and indexing it. Smart contracts may be appropriate for applications somewhere between the two examples, but again, smart contracts aren't appropriate for implementing a search engine.
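A sketch of what that client-side aggregation might look like, assuming each author publishes their timeline under an IPNS name (the names, status shape, and merge strategy here are all invented):

```typescript
// Sketch: the client aggregates followed users' timelines itself. Each author
// publishes their own timeline under their IPNS name; the follower resolves
// each name, fetches the content, and merges locally -- no server involved.
import { create } from 'ipfs-core'

interface Status { author: string; text: string; at: string }

async function aggregateTimelines(followedIpnsNames: string[]): Promise<Status[]> {
  const ipfs = await create()
  const merged: Status[] = []

  for (const name of followedIpnsNames) {
    // Resolve the author's mutable IPNS name to their latest timeline hash.
    let path = ''
    for await (const resolved of ipfs.name.resolve(name)) path = resolved

    const chunks: Uint8Array[] = []
    for await (const chunk of ipfs.cat(path)) chunks.push(chunk)
    merged.push(...JSON.parse(Buffer.concat(chunks).toString('utf8')))
  }

  // Ordering happens on the client, not on a server.
  return merged.sort((a, b) => b.at.localeCompare(a.at))
}
```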
But as I understand from @jbenet's comment, there are prospective plans to include one or more forms of signaling in IPFS itself. Is that right?
But as I understand from @jbenet's comment, there are prospective plans to include one or more forms of signaling in IPFS itself. Is that right?
yes. and nothing is preventing you from writing APIs and programs on top that add content to ipfs and provide references directly (i.e. hashes). invoking a search program (centralized or distributed, preemptively or on demand) still has to generate output, and that output can be served over ipfs. (Thinking of search results, many times, the order is the only new part, all the other information is often the same.)
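e.g., a rough sketch of such a program (the `runSearch` function is a stand-in for whatever index the application actually queries; only the ipfs calls are real):

```typescript
// Sketch: an on-demand search service that generates results, adds them to
// IPFS, and hands back only the hash. Peers can then cache and re-serve it.
import { create } from 'ipfs-core'

declare function runSearch(query: string): Promise<string[]>  // hypothetical

async function searchToHash(query: string): Promise<string> {
  const ipfs = await create()

  // The search itself still runs somewhere (centralized or distributed)...
  const results = await runSearch(query)

  // ...but the output is content-addressed, so identical results dedupe
  // and anyone holding the hash can serve them.
  const { cid } = await ipfs.add(JSON.stringify({ query, results }))
  return cid.toString()
}
```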
To throw a spanner in the works, and stepping back one or two levels: there's nothing stopping you from including an entire database or virtual machine image in the filesystem. For the PHP example, I guess that means including the web server inside IPFS, along with the scripts necessary to fire it up and get it working on any local target machine.
If need be: the Dropbox team approached this distributed-database problem and came up with the Datastore API/SDK. It was not widely adopted, and they ended up open-sourcing the JavaScript SDK, which handled conflict resolution.
https://github.com/dropbox/datastore-js
https://blogs.dropbox.com/developers/2013/07/the-datastore-api-a-new-way-to-store-and-sync-app-data/
Problem
The current implementation of IPFS is geared heavily toward distributing static content. In order for IPFS to build a fully "distributed web", with websites similar to the ones we use today, it will need to handle distributing dynamic content, including private user-specific data like passwords or account information.
Using IPFS there is currently no way to distribute private server-side content (i.e. a password database) to a bittorrent-esque swarm without giving seeders access to that sensitive information. One solution is to keep all this private server-side data on a single host server, but then you defeat the purpose of a distributed web (clients have to query the centralized host server for their private data, and there is a single point of failure for the entire swarm).

Proposed Solutions
The image and description below were my original rough concept of how to solve the problem. As a simple example, imagine the "secure data object" in the diagram is a user's encrypted password. In this case the user only needs a public key and a private key to access their private data safely from any seeder.
I imagine this method would be best if it were part of the IPFS protocol itself; that way each IPFS user would only need one public and one private key, and all IPFS servers could use one set of keys to encrypt private data on a peer-by-peer basis.
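For illustration, a rough sketch of the consumer side of this idea, assuming js-ipfs and plain RSA (a real design would use hybrid encryption; the function and parameter names are mine):

```typescript
// Sketch of the "secure data object" idea: the user fetches their encrypted
// record from any seeder and decrypts it locally with their private key.
import { create } from 'ipfs-core'
import { privateDecrypt } from 'crypto'

async function fetchSecureObject(cid: string, privateKeyPem: string) {
  const ipfs = await create()

  // Any seeder can serve these bytes; they are opaque without the private key.
  const chunks: Uint8Array[] = []
  for await (const chunk of ipfs.cat(cid)) chunks.push(chunk)

  // Only the holder of the matching private key can read the contents.
  return privateDecrypt(privateKeyPem, Buffer.concat(chunks)).toString('utf8')
}
```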
Disclaimer: I am not very knowledgeable about cryptography or security, and I'm sure there are flaws as far as overhead/design. If anyone has any improvements or suggestions on how this could be done using "vanilla" IPFS, please comment!