We can use two approaches to store pages in the wiki:
We can use an IPFS hash per page, with the following implications (see the sketch after this list):

- Each new page requires a contract call (a transaction).
- Each edit to a page requires a contract call.
- Loading the list of all pages can be slow. Listening to web3 events can take some time, and I wonder whether getting the page names/ids from a call to a function that returns a list would be faster, or even feasible, for 1k entries.
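As a rough illustration, here is a minimal Solidity sketch of the per-page approach. The contract and its members (`WikiPages`, `setPage`, `getPageIds`) are hypothetical names, not an existing implementation. The getter is paginated because returning ~1k entries in a single call may exceed a node's response limits, which is one way to address the enumeration question above without replaying events.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

// Hypothetical sketch: one IPFS hash stored per page.
contract WikiPages {
    // page id => IPFS hash of that page's content
    mapping(bytes32 => string) public pages;
    // all known page ids, so clients can enumerate pages via calls
    bytes32[] public pageIds;

    event PageSet(bytes32 indexed id, string ipfsHash);

    // Creating or editing a page is one transaction each.
    // (Deleting/unsetting pages is out of scope for this sketch.)
    function setPage(bytes32 id, string calldata ipfsHash) external {
        if (bytes(pages[id]).length == 0) {
            pageIds.push(id); // first write: register the new page
        }
        pages[id] = ipfsHash;
        emit PageSet(id, ipfsHash);
    }

    // Paginated getter: safer than returning the whole array at once.
    function getPageIds(uint256 offset, uint256 limit)
        external
        view
        returns (bytes32[] memory slice)
    {
        require(offset <= pageIds.length, "offset out of range");
        uint256 end = offset + limit;
        if (end > pageIds.length) {
            end = pageIds.length;
        }
        slice = new bytes32[](end - offset);
        for (uint256 i = offset; i < end; i++) {
            slice[i - offset] = pageIds[i];
        }
    }

    function pageCount() external view returns (uint256) {
        return pageIds.length;
    }
}
```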
Alternatively, we can use a single IPFS hash for the whole wiki, which also has implications (see the sketch after this list):

- It would save gas when creating/editing multiple pages at once.
- We wouldn't be able to set permissions (to "protect" pages) on a per-page basis.
- It would be much more difficult to "understand" what is being edited from the Solidity standpoint.
- This is an untested approach, and it may come with unforeseen problems.
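A minimal sketch of the whole-wiki approach, assuming for illustration that a single owner account may update the root hash (`WikiRoot` and `setRoot` are hypothetical names):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

// Hypothetical sketch: one IPFS hash for the entire wiki.
contract WikiRoot {
    address public owner;
    // One IPFS hash pointing at the root of the whole wiki,
    // e.g. an IPFS directory containing every page.
    string public rootHash;

    event RootUpdated(string rootHash);

    constructor() {
        owner = msg.sender;
    }

    // Any number of page creations/edits cost a single transaction:
    // publish the new tree to IPFS off-chain, then update the root.
    // Note the contract cannot tell which pages changed, so per-page
    // permissions are not enforceable here.
    function setRoot(string calldata newRootHash) external {
        require(msg.sender == owner, "not owner");
        rootHash = newRootHash;
        emit RootUpdated(newRootHash);
    }
}
```

This makes the gas trade-off concrete: edits are batched into one `setRoot` transaction, but the chain only sees an opaque root hash, which is why per-page protection and on-chain diffing become difficult.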
For large wikis such as Wikipedia (with 5,994,000+ articles in the English edition alone), the latter approach would be the only feasible one. We would store just one IPFS hash that refers to all of Wikipedia's content, as is done in https://blog.ipfs.io/24-uncensorable-wikipedia/.