m2ms / fragalysis-frontend

The React, Redux frontend built by webpack

Finalise self-documentation (pdf) of zip archive #785

Open phraenquex opened 2 years ago

phraenquex commented 2 years ago

Continuing from #747

duncanpeacock commented 2 years ago

Terrific! Please rearrange the file (while we're at it): @duncanpeacock @TJGorrie

Documentation for the downloaded zipfile [Top heading]

Download details [subheading]

Download URLs [sub-subheading]

Download URL: https://fragalysis.xchem.diamond.ac.uk/viewer/react/download/tag/a7ea4b13-90b2-4040-8396-6d3fe7b111a3
Download snapshot: https://fragalysis.xchem.diamond.ac.uk/viewer/react/projects/1350/1010
[Ensure these render as clickable hyperlinks in the PDF, and include https://]

Download options selected

The following options were checked in the download dialogue:

[use actual words in the modal - just type them across if necessary]

Download command (JSON) [currently: "Original search"]

[Add a brief description of what this means, e.g.:] The JSON command sent from the front end to the back end to generate the download. This can be reused programmatically as a POST command. The actual JSON (as currently).
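To illustrate what "reused programmatically as a POST command" would mean for a user: the JSON embedded in the Readme could simply be replayed as the body of a POST to the download endpoint. The field names and endpoint path below are illustrative, not the actual Fragalysis API.

```python
import json

# Hypothetical download command as it might appear in the Readme.
# Field names here are placeholders, not the real API schema.
download_command = {
    "target_name": "Mpro",
    "proteins": "all",
    "metadata_info": True,
}

# The Readme would embed this as a JSON string; replaying the download is
# then just a POST of the same body, e.g. with the `requests` library:
#   requests.post(
#       "https://fragalysis.xchem.diamond.ac.uk/api/download_structures/",
#       json=download_command,
#   )
body = json.dumps(download_command)
print(body)
```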

Directory structure [subheading] [note spelling mistake!]

[The text as you currently have it.]

Files included

[The list as you currently have it]

duncanpeacock commented 2 years ago

I've discussed this with Boris and I think I have a solution which should work.

These are the essential steps:

  1. We add a JSON string (called document_details) to the POST fields in the API. It will contain the information the front end needs to supply to the back end (essentially the contents of the Download details section above, which is all really front-end material) and will be filled in every time the file is requested (since the same zip contents can be attached to more than one snapshot).
  2. The back end will now create two separate files: the zip contents without the Readme (which will be cached, etc.) and Readme.pdf (which is generated fresh every time from the document_details JSON string, even if the zip file is persistent).
  3. The GET call will then add the Readme.pdf to a copy of the zip file and return the combined file. You can't modify a file in an existing zip file without rewriting the whole zip, but you can easily add a file to an existing zip - hence the solution.
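Step 3 relies on the fact that appending to a zip archive is cheap even though rewriting an entry is not. A minimal sketch with Python's standard zipfile module (file names are illustrative):

```python
import shutil
import zipfile

# Stand-ins for the cached archive and the per-request Readme.
with zipfile.ZipFile("cached_download.zip", "w") as zf:
    zf.writestr("aligned/structure_1.pdb", "ATOM ...")

with open("Readme.pdf", "wb") as fh:
    fh.write(b"%PDF-1.4 readme placeholder")

# Copy the cached zip, then open the copy in append mode ("a") and add the
# Readme. The existing entries are left untouched; no full rewrite needed.
shutil.copyfile("cached_download.zip", "combined_download.zip")
with zipfile.ZipFile("combined_download.zip", "a") as zf:
    zf.write("Readme.pdf", arcname="Readme.pdf")

with zipfile.ZipFile("combined_download.zip") as zf:
    print(sorted(zf.namelist()))
```

The GET handler would stream combined_download.zip back to the client while the cached archive stays reusable for the next request.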

The complexity is in the JSON string, so it would be best to keep it quite simple. Any formatting of the PDF should be done by the back end, so the JSON only needs to indicate which section each data item belongs to and what it is. A structure like this should work:

{
  "download_urls": {
    "download_url": "/viewer/react/download/tag/etc...",
    "snapshot_url": "/viewer/react/projects/1350/1010"
  },
  "download_options_list": {
    "Selected: All structures": "True",
    etc.
  },
  "download_command_json": " .. "
}

I've made the URLs relative as the back end should add the host/HTTPS when it adds the URLs to the file.
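The convention above (front end sends relative paths, back end prefixes the scheme and host when writing the Readme) can be sketched with the standard library; in practice the host would come from the incoming request rather than a constant:

```python
from urllib.parse import urljoin

# Illustrative host; in the real service this comes from the request.
HOST = "https://fragalysis.xchem.diamond.ac.uk"

# Relative URLs as supplied by the front end in document_details.
document_details = {
    "download_urls": {
        "download_url": "/viewer/react/download/tag/a7ea4b13-90b2-4040-8396-6d3fe7b111a3",
        "snapshot_url": "/viewer/react/projects/1350/1010",
    }
}

# Back end resolves each relative path against the host before rendering
# the Readme, so the PDF always carries full clickable URLs.
absolute = {key: urljoin(HOST, path)
            for key, path in document_details["download_urls"].items()}
print(absolute["snapshot_url"])
```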

In terms of work, it is a bit fiddly, as I explained before: I need to construct the Readme using the JSON, and I had a few problems with formatting and the pandoc library (which does the translation) first time round. So I think we're looking at around 8-10 hours' work, including some joint testing with Boris to make sure the front and back ends are talking to each other properly before we go to staging.

duncanpeacock commented 2 years ago

This ticket is being split into two parts:

Part 1:

Part 2:

phraenquex commented 7 months ago

Fixed in V2, but still referring to this spec from #1259.