MIMSoftware / haste-server

Customizations made to seejohnrun/haste-server

Update redis key/value storage #34

Closed by neandrake 10 years ago

neandrake commented 10 years ago

Store the document in multiple parts using split key names:

"data.{key}" = "{base64 encoded gzipped file}"
"info.{key}" = "{stringified json object containing metadata about the file}"

This avoids having to store the document inside a stringified JSON object. In the future it would likely make streaming large objects directly to Redis easier to implement. It also allows for potentially splitting really large files into multiple entries, a la

"data.0.{key}" = "..."
"data.1.{key}" = "..."

The downside is that when requesting a file, two database entries have to be queried/pulled; however, the keys used are predictable, so they could potentially be fetched in a single query (assuming Redis supports this).
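
Redis does support fetching multiple keys in one round trip via MGET. A rough sketch of the read path under that approach; `fetchDocument` is a hypothetical helper reusing the `client` and `zlib` handles from the write sketch above:

```javascript
// Illustrative read path: pull the data and info entries with a single MGET,
// then base64-decode and gunzip the document body.
function fetchDocument(key, callback) {
  client.mget(['data.' + key, 'info.' + key], function (err, replies) {
    if (err) return callback(err);
    if (!replies[0]) return callback(new Error('document not found'));
    zlib.gunzip(Buffer.from(replies[0], 'base64'), function (err, plaintext) {
      if (err) return callback(err);
      callback(null, plaintext.toString('utf8'), JSON.parse(replies[1]));
    });
  });
}
```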

When querying/storing a file, remember to use transactions, even though Node is single-threaded.
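
With node_redis that would mean wrapping both SETs in a MULTI/EXEC block, roughly like the sketch below (illustrative helper name, same `client` as above):

```javascript
// Illustrative atomic write: both keys are committed in one MULTI/EXEC block,
// so a reader never sees a data entry without its matching info entry.
function storeDocumentAtomic(key, encodedData, metadata, callback) {
  client.multi()
    .set('data.' + key, encodedData)
    .set('info.' + key, JSON.stringify(metadata))
    .exec(callback);
}
```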

As part of this, we should also update the current gzipping process to only gzip when the file size exceeds some threshold, since gzip + base64 inflates really small documents (we should probably only skip gzipping for plaintext).

http://stackoverflow.com/questions/7844001/what-is-the-lower-threshold-to-not-perform-http-compression

From that it sounds like we should use deflate/inflate instead of gzip/gunzip for text files (I haven't actually looked into the difference yet...), and probably set the threshold at around 256 bytes.
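
A rough sketch of what such a threshold check could look like with Node's zlib; the 256-byte cutoff, the `encodeDocument` helper, and the `compressed` metadata flag are placeholder assumptions, not decisions made here:

```javascript
var zlib = require('zlib');

// Assumed cutoff below which deflate + base64 tends to grow the payload.
var COMPRESS_THRESHOLD = 256;

// Illustrative encoder: store small documents as-is and only deflate larger
// ones, recording which path was taken in the metadata.
function encodeDocument(plaintext, callback) {
  if (Buffer.byteLength(plaintext, 'utf8') <= COMPRESS_THRESHOLD) {
    return callback(null, plaintext, { compressed: false });
  }
  zlib.deflate(plaintext, function (err, deflated) {
    if (err) return callback(err);
    callback(null, deflated.toString('base64'), { compressed: true });
  });
}
```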