Open cthtrifork opened 1 year ago
Documentation search does not seem to work. If I search for “Avro”, nothing appears, although it is documented under “Schema registry”.
Fixed!
We would like some more examples of how to deploy to the environment with GitOps, including how to set up the necessary network policies.
We are waiting for the platform cluster to mature and will document it afterwards. For now, the platform cluster serves as a reference architecture; consultancy and review with the enabling team is recommended.
The documentation structure is in some places unclear. Some examples:
How to work with the various technologies is mixed across the levels of the navigation menu. For example, “Reference->OpenSearch” sits on one level, while Apache Flink is spread across “Reference->How to->Creating a Flink Job” and “Reference->Recommendations->Apache Flink”. It might be easier to structure consistently by technology/tool and then add detailed how-tos in sub-menus underneath. Right now, the information is spread around and difficult to get to.
“Getting Started” and “Reference” each contain information on some of the components of the data platform (e.g., “Getting Started->Create a processing job” vs. “How To->Creating a new job”), but it can be unclear where to start if I, as a developer, need to figure out how to develop, build and deploy a Flink job end to end (see the sketch below for the kind of walkthrough we mean).
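For illustration, a minimal sketch of the kind of starting point such an end-to-end walkthrough could use (assuming a plain Java DataStream job; the class name, sample data and job name are made up, not taken from the platform documentation):

```java
// Hypothetical minimal Flink job, only to illustrate what the "develop" step
// of an end-to-end walkthrough might start from.
import org.apache.flink.api.common.functions.MapFunction;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class ExampleJob {
    public static void main(String[] args) throws Exception {
        // Local environment when run from the IDE; cluster environment when
        // the packaged job is submitted to a Flink cluster.
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        env.fromElements("a", "b", "c")
           .map((MapFunction<String, String>) String::toUpperCase)
           .print();

        env.execute("example-job");
    }
}
```

A walkthrough could then continue from here with how the job is built/packaged and deployed to the platform.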
Thanks for the great feedback; we will look into this.
Examples and references point to aspects that are not true for the Kamstrup variant of the Data Platform, making them inapplicable. How do we deal with this variance?
Example:
“Kafka topics are automatically created by the application.”
In reality, Kafka topics must be built to adhere to the access role schemes.
MBJ: I think we have to re-read our documentation, as this has changed over time: topics are created automatically when developing locally, but must be created up front in the clusters.
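For illustration, a minimal sketch of what creating a topic up front could look like with the Kafka AdminClient (the broker address, topic name, partition count and replication factor below are placeholders, not values from the platform documentation; the real topic name would need to follow the access role scheme mentioned above):

```java
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

import java.util.List;
import java.util.Properties;

public class CreateTopicUpFront {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Placeholder broker address; the real bootstrap servers would come
        // from the environment's configuration.
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        try (AdminClient admin = AdminClient.create(props)) {
            // Placeholder topic name, partition count and replication factor.
            NewTopic topic = new NewTopic("example.topic", 3, (short) 1);
            admin.createTopics(List.of(topic)).all().get();
        }
    }
}
```

Locally this step is not needed, since topics are auto-created; in the clusters, something like the above (or an equivalent provisioning step) has to happen before the application is deployed.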
It is unclear to developers where/how to give direct feedback or suggestions for improvements to the documentation. How can we set this up in a way that also keeps traceability (e.g., Jira)?
We have made an initial setup; let's try to add more structure over time, as it is being used by more and more people. Please reach out to the enabling team.
Martin Boye: Perhaps a landing page for onboarding new developers?
The documentation seems in many places to be very high-level; few things are explained in depth.