Open brianok77 opened 4 years ago
After speaking with @toadzky, we should instead load the orgs, sites, and QR codes directly from the simulation data by creating flat files and using the Neptune Bulk Load API (https://docs.aws.amazon.com/neptune/latest/userguide/bulk-load.html), and then use the API to create users and scans from those files. This is because using the API for orgs and sites would generate QR codes sent via email, etc., instead of providing the necessary data in the API payload. I'll update the story details when I have time to dig into exactly what this means for things like the graph structure and how we can create the GUIDs.
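For the flat files, Neptune's Gremlin bulk-load format expects vertex CSVs with `~id` and `~label` system columns and property columns declared as `name:Type` in the header. A minimal sketch of turning sim records into those rows, assuming a hypothetical org/site record shape (`id`, `name`) and using fresh UUIDs for the `~id` values while keeping the original sim id as a `sourceId` property:

```python
import csv
import io
import uuid

def sim_to_vertex_rows(orgs, sites):
    """Flatten simulated orgs/sites into Neptune bulk-load vertex rows.

    The input record shape is an assumption for illustration; the real
    fields depend on the simulation JSON from #72.
    """
    rows = []
    for org in orgs:
        rows.append({"~id": str(uuid.uuid4()), "~label": "organization",
                     "name:String": org["name"], "sourceId:String": org["id"]})
    for site in sites:
        rows.append({"~id": str(uuid.uuid4()), "~label": "site",
                     "name:String": site["name"], "sourceId:String": site["id"]})
    return rows

def write_vertex_csv(rows):
    """Serialize the rows to the CSV text Neptune's loader expects."""
    buf = io.StringIO()
    writer = csv.DictWriter(
        buf, fieldnames=["~id", "~label", "name:String", "sourceId:String"])
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()
```

The resulting file would be staged to S3 and loaded via a POST to the cluster's `/loader` endpoint; keeping `sourceId` as a property also gives us a hook for the id-mapping discussed below.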
@toadzky is currently rearchitecting the backend to store data in DynamoDB and then push it into Neptune via DynamoDB Streams. Therefore we will load data directly into DynamoDB instead of Neptune as mentioned in the previous comment. @toadzky will provide the DynamoDB schema ASAP so we can start programming against it.
The parsing of the JSON file, the data transformation, and a mock upload are done in a local branch. Now starting discovery of the best way to upload this data with a marker that it is sourced from simulation data (so we can easily clear the data if we want to run a new sim with "better" parameters).
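One option for that marker: tag every simulated item with a `source` attribute (and a per-run id) so a cleanup job can later scan-and-delete only simulation rows. The table key shape and attribute names below are assumptions until @toadzky shares the actual DynamoDB schema:

```python
SIM_SOURCE = "simulation"

def to_dynamo_item(entity_type, entity, sim_run_id):
    """Build a DynamoDB item dict for a simulated entity.

    pk/sk layout and attribute names are hypothetical placeholders;
    the real schema is still pending.
    """
    return {
        "pk": f"{entity_type}#{entity['id']}",
        "sk": entity_type,
        "source": SIM_SOURCE,    # marker so sim data is easy to find and purge
        "simRunId": sim_run_id,  # lets us clear a single run's data
        **{k: v for k, v in entity.items() if k != "id"},
    }
```

With boto3, these items could be written through `table.batch_writer()`, and cleanup would be a scan filtered on `Attr("source").eq(SIM_SOURCE)` followed by batched deletes.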
We need a tool that will read the JSON files extracted in #72 and make the proper zerobase smart-tracing-api calls to populate the staging database with the data from the experiment. Important: this tool will need to maintain its own set of maps from "source id" to "zerobase id" for sites and devices, so that when the scans and test-result data are loaded they can be associated with the correct sites/devices.
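The id bookkeeping could be as simple as two dicts populated as the create calls return. A sketch, where `create_site`/`create_device` stand in for the real smart-tracing-api calls (assumed here to return the new zerobase id):

```python
class IdMapper:
    """Tracks source-id -> zerobase-id mappings built up during the load."""

    def __init__(self):
        self.site_ids = {}    # source site id -> zerobase site id
        self.device_ids = {}  # source device id -> zerobase device id

    def load_sites(self, sites, create_site):
        for site in sites:
            self.site_ids[site["id"]] = create_site(site)

    def load_devices(self, devices, create_device):
        for dev in devices:
            self.device_ids[dev["id"]] = create_device(dev)

    def translate_scan(self, scan):
        """Rewrite a scan record so it references zerobase ids."""
        return {**scan,
                "siteId": self.site_ids[scan["siteId"]],
                "deviceId": self.device_ids[scan["deviceId"]]}
```

A `KeyError` in `translate_scan` would then surface any scan referencing a site/device that was never created, which is a useful sanity check on the sim data.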