Current state
Right now, suggestions work this way:
A black-box function on the datasource provides a list of possible tables it can create, along with meta information for each column
These tables are passed to all visualizations - a black-box function per visualization checks whether the current visualization can "consume" a given table by returning a state object for this configuration plus a score
The score is used to rank the potential matches, and the best one is picked
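A minimal sketch of this table-based flow, assuming TypeScript contracts; all names (`TableSuggestion`, `pickBest`, and so on) are hypothetical and only illustrate the shape of the interface, not the real plugin API:

```typescript
// Hypothetical sketch of the current table-based suggestion contract.

interface ColumnMeta {
  id: string;
  dataType: 'number' | 'string' | 'date';
  isBucketed: boolean;
}

interface TableSuggestion {
  columns: ColumnMeta[];
}

interface VisualizationSuggestion<State> {
  state: State; // full visualization state for this configuration
  score: number; // used to rank potential matches
}

// The datasource black box offers possible tables ...
type GetTableSuggestions = () => TableSuggestion[];

// ... and each visualization's black box tries to consume each table.
type GetVisualizationSuggestions<State> = (
  table: TableSuggestion
) => VisualizationSuggestion<State>[];

// The frame ranks all suggestions by score and picks the best one.
function pickBest<State>(
  tables: TableSuggestion[],
  suggest: GetVisualizationSuggestions<State>
): VisualizationSuggestion<State> | undefined {
  return tables
    .flatMap((table) => suggest(table))
    .sort((a, b) => b.score - a.score)[0];
}
```

Note that nothing in this contract forces the visualization to account for every column of the table, which is exactly where stale references can slip in.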
This approach gives a lot of flexibility: no assumptions about what the datasource or the visualization can do are hardcoded - the only interface is the table data structure.
However, this shifts a lot of responsibility to the suggestion black-box functions of visualizations and datasources:
All columns need to be "consumed"; the suggestion logic needs to make sure no stale references are left behind - otherwise this can lead to subtle bugs for existing saved objects
It's very hard to implement a change in behavior across all chart types because it needs to be implemented separately in each visualization's suggestion black box
The separation of concerns (just a table shape, nothing else) makes it easy to accidentally implement suggestions that behave in ways that are confusing to the user during chart switches
Target state
To address the pain points from above, the following general architecture would make more sense:
Instead of operating on possible tables and how they map to visualization states, refactor the suggestion logic to operate on possible columns of a data table and visualization dimensions to map them to.
The new flow would look like this:
The datasource states which columns it can offer
Visualizations state which columns they can "consume" and which dimensions they have to fill (some mandatory, some optional)
A pattern-matching engine (part of the frame) matches and scores both of them with a single algorithm
The datasource is informed about which columns to produce and which columns to drop
The visualization is informed about which columns to map to which dimensions
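The new flow could be expressed with contracts roughly like the following sketch; all identifiers (`OfferedColumn`, `DimensionSpec`, `matchColumns`) are invented for illustration, and the matching algorithm shown is a simple greedy assignment, not a committed design:

```typescript
// Hypothetical sketch of the proposed column/dimension matching contract.

interface OfferedColumn {
  id: string;
  dataType: 'number' | 'string' | 'date';
  isBucketed: boolean;
}

interface DimensionSpec {
  id: string; // e.g. 'x', 'y', 'breakdown'
  required: boolean; // mandatory vs. optional dimension
  accepts: (col: OfferedColumn) => boolean;
}

interface MatchResult {
  mapping: Array<{ columnId: string; dimensionId: string }>;
  droppedColumnIds: string[]; // the frame knows explicitly what gets dropped
  score: number;
}

// Single matching algorithm living in the frame: greedily assign columns
// to dimensions; any unmatched column is explicitly reported as dropped.
function matchColumns(
  columns: OfferedColumn[],
  dimensions: DimensionSpec[]
): MatchResult | undefined {
  const mapping: Array<{ columnId: string; dimensionId: string }> = [];
  const remaining = [...columns];
  for (const dim of dimensions) {
    const idx = remaining.findIndex((c) => dim.accepts(c));
    if (idx === -1) {
      if (dim.required) return undefined; // mandatory dimension unfilled
      continue;
    }
    mapping.push({ columnId: remaining[idx].id, dimensionId: dim.id });
    remaining.splice(idx, 1);
  }
  return {
    mapping,
    droppedColumnIds: remaining.map((c) => c.id),
    score: mapping.length / (dimensions.length + remaining.length),
  };
}
```

Because the mapping and the dropped columns are first-class outputs of one shared function, unmapped columns can no longer be silently forgotten by an individual visualization.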
Advantages
The "pattern matching" part is already happening in an indirect way as part of the visualization suggestion functions - this change unifies it in a single place and makes it explicit, which will make it more robust and easier to change in a consistent fashion
Much harder to "forget" about unmapped columns in the table because suggestion code needs to be written in an explicit way modeled around dimensions
The editor frame is aware of dropped/added column meta information, which can be used for other features (e.g. showing this information to the user before making a change)
Navigation from vis to vis is always consistent based on a single set of rules instead of having different rules for each type of vis
Disadvantages
Assumes columns are independent of each other and each column can be dropped at any point - however, this is not necessarily true in all cases
Some changes might be harder in the future because "ad-hoc" changes bolted on for a special case are much harder to express.