Open kevinstonetg opened 2 years ago
Add documentation describing how a production deployment would work at scale for both training and inference. For example, Optum asked today how GNNs could be leveraged with their existing 8-node TG cluster.
How does our deployment architecture support MLOps (e.g., the MLOps Maturity Model, Level 0 – Level 4, developed by Microsoft Azure)?