> [!WARNING]
> This is a prototype application. Development is still working on mitigating the risks of using generative AI. Our current target users are Sage staff or other data professionals, not (yet) general Synapse users.
Research communities supported by dedicated data managers receive the benefit of having data packaged and disseminated optimally.
Data managers themselves could benefit from tooling that supports their important and demanding work of curating data, developing the data model, and facilitating data sharing in general.
And as with other knowledge work, incorporating AI could greatly boost productivity, though this is perhaps best achieved through an internal or "wrapper" interface that mitigates pitfalls^1.
Developers can also help figure out where AI can be inserted into workflows and how to design the technology for doing that.
This is the proof-of-concept for such an application.
The design considered the different responsibilities of a data manager and whether/how each can be prioritized for an assisted workflow. There are many responsibilities^2, but the general list can be refined and ranked based on the work at Sage:
To be clear, the Assisted Curation/Content ENhancement Tool (ACCENT) is a proof-of-concept CLI tool that focuses only on helping with the first two responsibilities. In its first iteration, ACCENT narrows the scope of the curation assistance even further, to dataset curation for the NF-OSI use case. The idea is to work out the "wrapper" interface into a usable and productive workflow first.
To create a more useful and "responsible" wrapper interface (in several senses of the word "responsible"), the app builds structure around the responsibilities described above to match the org's current workflows. Unlike interacting with an LLM through its default interface, interaction here is layered with additional infra and guardrails, such as specific prompt templates and controlled tool-use access.
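As a minimal sketch of what "additional infra and guardrails" can mean in practice (all names here are hypothetical, not the actual implementation): a fixed prompt template plus a whitelist of tools, instead of free-form chat with the raw model.

```python
# Hypothetical sketch of a guardrailed "wrapper": the user never writes a raw
# prompt; requests go through a template and a tool whitelist.
ALLOWED_TOOLS = {"curate_dataset", "ask_database", "visualize"}

TEMPLATE = (
    "You are assisting with dataset curation for {org}.\n"
    "Task: {task}\n"
    "Use only the provided tools."
)

def build_request(task: str, tools: list[str], org: str = "NF-OSI") -> dict:
    """Reject any tool not on the whitelist before the request reaches the LLM."""
    unknown = set(tools) - ALLOWED_TOOLS
    if unknown:
        raise ValueError(f"Tools not allowed: {sorted(unknown)}")
    return {"prompt": TEMPLATE.format(org=org, task=task), "tools": tools}
```

The point of the design is that the guardrail (the whitelist check) runs before any provider API is called, so mistakes fail fast and locally.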
The app integrates two providers, Anthropic and OpenAI. Within the same conversation it is possible to switch between models from the same provider, though not between different providers: e.g., switching from ChatGPT-3.5 to ChatGPT-4o is fine, but not from ChatGPT-3.5 to Claude Sonnet-3.5. However, just because the switching feature exists does not mean the user is expected to manually switch between models for different tasks. For both providers, the default is a model on the smarter end; trying to reduce costs by switching to a cheaper model for some tasks is likely premature optimization at this early stage.
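The switching rule can be sketched as follows (model identifiers are examples only, not necessarily the exact names the app uses): switching is allowed within a provider, never across providers.

```python
# Sketch of the "switch within provider only" rule; model names are examples.
PROVIDER_MODELS = {
    "openai": {"gpt-3.5-turbo", "gpt-4o"},
    "anthropic": {"claude-3-5-sonnet"},
}

def provider_of(model: str) -> str:
    """Look up which provider serves a given model."""
    for provider, models in PROVIDER_MODELS.items():
        if model in models:
            return provider
    raise ValueError(f"Unknown model: {model}")

def can_switch(current: str, new: str) -> bool:
    """A conversation may change models only within the same provider."""
    return provider_of(current) == provider_of(new)
```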
- `OPENAI_API_KEY` in env/config
- `ANTHROPIC_API_KEY` in env/config

Here is how Responsibilities map to structured modes:
Planned functionality has been scoped/mapped as below for specific versions. (This roadmap may change with feedback and outside suggestions.)
- `curate_dataset` function call
- `ask_database` function call
- `visualize` function call for data model
- `visualize` function call for dataset

Nothing more is planned until after the Evaluation (below).
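For illustration, a function-call definition for `curate_dataset` could look like the JSON-schema-style spec below. The parameter names are hypothetical, invented for this sketch; the actual interface may differ.

```python
# Purely illustrative tool/function spec for curate_dataset.
# "dataset_id" is a hypothetical parameter name, not the confirmed interface.
curate_dataset_spec = {
    "name": "curate_dataset",
    "description": "Draft curated annotations/metadata for a Synapse dataset.",
    "parameters": {
        "type": "object",
        "properties": {
            "dataset_id": {
                "type": "string",
                "description": "Synapse ID of the dataset, e.g. 'syn123'.",
            },
        },
        "required": ["dataset_id"],
    },
}
```

Declaring functions as structured schemas like this is what lets the LLM request an action while the app retains control over what actually executes.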
There are ideas for other helper workflows and functionality, but these depend on the first round of proof-of-concept feedback, in case this is not the right approach or the design needs to change significantly. To determine whether this actually benefits data management work, we need to evaluate the proof-of-concept in several ways. We would have to ask a user, "How would you compare using this versus trying to accomplish the same work goal using a different workflow that":
There is also workflow-specific research needed. To be continued...