When we change or replace a catalog (such as catalog "1"), the cached bindings in the dashboard-api service can go stale, still representing the state and model of the older catalog from when it was first contacted. In practice we have worked around this by restarting the web service after publishing the new release data.
It might be better if a last-ditch exception handler could catch these errors and recover. The cache for the current catalog (or all catalogs, if that is easier) should be purged and the request restarted to obtain a fresh catalog binding. The simplest route may be to purge the cache and return a 503 Service Unavailable, then patch the client web UI pages to retry on that error if they don't already; a sketch of that approach follows.
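A minimal sketch of the last-ditch handler, assuming a Flask-style service. `StaleCatalogBindingError`, `CatalogCache`, and `catalog_cache` are hypothetical names for illustration; the real dashboard-api would have its own equivalents.

```python
from flask import Flask, jsonify

app = Flask(__name__)


class StaleCatalogBindingError(Exception):
    """Raised when a cached binding no longer matches the published catalog."""

    def __init__(self, catalog_id=None):
        self.catalog_id = catalog_id


class CatalogCache:
    """Hypothetical cache of catalog bindings, keyed by catalog id."""

    def __init__(self):
        self._bindings = {}

    def purge(self, catalog_id=None):
        # Drop one catalog's binding, or everything if no id is given
        # (purging all is the "if easier" option from the notes above).
        if catalog_id is None:
            self._bindings.clear()
        else:
            self._bindings.pop(catalog_id, None)


catalog_cache = CatalogCache()


@app.errorhandler(StaleCatalogBindingError)
def recover_from_stale_binding(err):
    # Purge so the next request rebuilds a fresh binding, then ask the
    # client to retry shortly instead of re-running the request in-place.
    catalog_cache.purge(err.catalog_id)
    resp = jsonify(error="catalog binding was stale; cache purged, please retry")
    resp.status_code = 503
    resp.headers["Retry-After"] = "1"
    return resp
```

Returning 503 with a Retry-After header keeps the server-side handler simple and pushes the retry to the client, which matches the suggestion above to patch the web UI pages to retry on that status.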