Perhaps we can publish the model together with some real-world sample SQL programs, so that users can copy a program and adapt it to a new use case with minor modifications.
For sharing a guide to building pipelines for data pre-processing, model training, and prediction, a Jupyter Notebook is quite straightforward.
For sharing trained model weights with other users so that they can run prediction or fine-tuning, the current design introduces a "model zoo table", which records the detailed information of every trained model for later sharing. This feature is still under development.
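As a rough illustration of what such a model zoo table could record, here is a minimal sketch using SQLite. The table name, every column name, and the sample values are assumptions made for illustration only; the actual schema is still under development.

```python
import sqlite3

# Sketch of a hypothetical "model zoo table"; all names below are assumptions.
conn = sqlite3.connect(":memory:")
conn.execute("""
CREATE TABLE model_zoo (
    model_id      TEXT PRIMARY KEY,  -- unique name of the trained model
    creator       TEXT,              -- user who trained the model
    model_def     TEXT,              -- model definition, e.g. "DNNClassifier"
    train_select  TEXT,              -- the SELECT statement used for training
    hyperparams   TEXT,              -- hyperparameters as a JSON string
    weights_uri   TEXT,              -- where the serialized weights are stored
    created_at    TEXT               -- training timestamp
)
""")

# Record one trained model so other users can look it up and reuse it.
conn.execute(
    "INSERT INTO model_zoo VALUES (?, ?, ?, ?, ?, ?, ?)",
    ("my_dnn_model", "alice", "DNNClassifier",
     "SELECT * FROM iris.train", '{"n_classes": 3}',
     "oss://bucket/models/my_dnn_model", "2019-07-01"),
)

row = conn.execute(
    "SELECT creator, model_def FROM model_zoo WHERE model_id = ?",
    ("my_dnn_model",),
).fetchone()
print(row)  # -> ('alice', 'DNNClassifier')
```

With a table like this, another user could query the zoo to find a model's weight location and training statement, then load the weights for prediction or continue training from them.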
Some open discussions about the model zoo: