Closed: cthtrifork closed this issue 1 year ago.
We had to look at the code to understand the generic and specific Avro builders.
DAT: The generic Avro builder is now only used for communication with OpenSearch, and the specific Avro builder has been changed so it follows the same syntax as the other builders.
Generally, the builders are not explained in depth. It is not possible to learn from the documentation how to use them.
We do not understand why the connectors, e.g. the Kafka source, are wrapped. This makes it impossible to use the documentation of the wrapped connectors and places even higher demands on the Cheetah documentation. Why were the existing connectors you build on not simply extended?
Also, this leads to limited descriptions in the mouse-over tooltips in the development IDEs.
MNR: I generally agree with this sentiment. The way we create our jobs now is very specific to the "Cheetah" flavor of Flink, which will cause developers to learn our version of how to work with Flink rather than gaining general knowledge of how to work with Flink in any environment. It should be possible to "add" our functionality to the default connectors rather than wrapping them entirely and hiding access to the inner workings of Flink.
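As an illustration of the "add, don't wrap" idea, here is a minimal sketch in plain Flink Java: a small helper applies Cheetah-style defaults to Flink's own `KafkaSourceBuilder`, so the full upstream builder API stays visible to the developer. The `CheetahDefaults` name, the bootstrap address, and the security property are assumptions for the sketch, not the actual library API.

```java
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.kafka.source.KafkaSource;
import org.apache.flink.connector.kafka.source.KafkaSourceBuilder;

public final class CheetahDefaults {
    // Hypothetical helper: applies Cheetah conventions (bootstrap servers, auth)
    // to Flink's own KafkaSourceBuilder instead of hiding it behind a wrapper.
    public static <T> KafkaSourceBuilder<T> apply(KafkaSourceBuilder<T> builder) {
        return builder
                .setBootstrapServers("kafka:9092")                    // assumed cluster address
                .setProperty("security.protocol", "SASL_PLAINTEXT"); // assumed auth setting
    }

    public static void main(String[] args) {
        // The developer keeps full access to the upstream builder API.
        KafkaSource<String> source = apply(
                KafkaSource.<String>builder()
                        .setTopics("input-topic")
                        .setGroupId("my-job")
                        .setValueOnlyDeserializer(new SimpleStringSchema()))
                .build();
        System.out.println("Configured source: " + source);
    }
}
```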
Martin Boye: https://docs.cheetah.trifork.dev/jobs/cheetah-app-jobs/index.html
Describe the purpose of this page, and explain how to navigate between the articles.
Generally, the builders are not explained in depth. It is not possible to learn from the documentation how to use them.
There is now more documentation at https://docs.cheetah.trifork.dev/libraries/cheetah-lib-processing/README.html as well as in the code through Javadoc. The documentation is still evolving and more feedback is welcome.
We do not understand why the connectors, e.g. the Kafka source, are wrapped. This makes it impossible to use the documentation of the wrapped connectors and places even higher demands on the Cheetah documentation. Why were the existing connectors you build on not simply extended?
Fixed! We now provide factories for the source and sink and allow developers to play with the Flink connector as much as they want. See more about the changes in the GitHub releases and read more about the usage in our docs.
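A minimal sketch of what that usage could look like, using only the standard Flink Kafka connector API: the factory described above would presumably hand back a builder pre-configured like the one below, which the developer can keep configuring with plain Flink calls. The address, topic, and job names are placeholders.

```java
import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.kafka.source.KafkaSource;
import org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class KafkaSourceJobSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // In the described setup, a Cheetah factory would return a builder like
        // this one, pre-configured with cluster and auth settings; every plain
        // Flink builder method then remains available to the developer.
        KafkaSource<String> source = KafkaSource.<String>builder()
                .setBootstrapServers("kafka:9092")                // assumed address
                .setTopics("input-topic")
                .setGroupId("my-job")
                .setStartingOffsets(OffsetsInitializer.earliest())
                .setValueOnlyDeserializer(new SimpleStringSchema())
                .build();

        env.fromSource(source, WatermarkStrategy.noWatermarks(), "Kafka Source")
                .print();
        env.execute("kafka-source-job-sketch");
    }
}
```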
Also, this leads to limited descriptions in the mouse-over tooltips in the development IDEs.
Fixed! See the two paragraphs above.
We had to look at the code to understand the generic and specific Avro builders.
The generic Avro builder is gone, and the specific Avro builder is better documented. See our documentation.
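For orientation, a hedged sketch of the specific-Avro path using plain Flink's flink-avro module; the actual Cheetah builder API may differ, so treat the non-Flink names below as assumptions. `recordClass` stands for a class generated by the Avro compiler from an .avsc schema.

```java
import org.apache.avro.specific.SpecificRecord;
import org.apache.flink.connector.kafka.source.KafkaSource;
import org.apache.flink.formats.avro.AvroDeserializationSchema;

public final class SpecificAvroSourceSketch {
    // recordClass is any class generated by the Avro compiler; such classes
    // implement org.apache.avro.specific.SpecificRecord.
    static <T extends SpecificRecord> KafkaSource<T> buildSource(Class<T> recordClass) {
        return KafkaSource.<T>builder()
                .setBootstrapServers("kafka:9092") // assumed address
                .setTopics("events")               // assumed topic name
                .setGroupId("avro-job")
                .setValueOnlyDeserializer(AvroDeserializationSchema.forSpecific(recordClass))
                .build();
    }
}
```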
Generally, we are missing more thoroughly explained code examples of how to use the library.
There is now more documentation on our docs site.
The description of how to set up a GitOps Flink deployment is limited.
E.g. what do the individual configuration options mean? How do we configure for more parallelism…
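As a partial, hedged answer to the parallelism question: Flink lets a job set its default parallelism in code, as sketched below; in a GitOps setup the same knob is typically also exposed through the Flink deployment manifest. The value 4 and the toy pipeline are placeholders.

```java
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class ParallelismSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        // Default parallelism for all operators in this job; a single operator
        // can still override it with .setParallelism(n) on that operator.
        env.setParallelism(4);

        env.fromElements(1, 2, 3)
                .map(x -> x * 2)
                .returns(Types.INT) // explicit type hint for the lambda
                .print();

        env.execute("parallelism-sketch");
    }
}
```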
We are missing information on the interfaces and how to use them (i.e. in Jobs -> cheetah-lib-processing).
There is very little information on the interfaces, and we are missing explained examples.