Closed — davyodom closed this issue 3 years ago
Is it feasible to create another build/test pipeline (e.g. based on Docker), removing the need to use ko to build/test/deploy this particular channel?
cc @chaodaiG for an infra perspective cc @jonjohnsonjr for the ko side of things
I think it's worthwhile for us to figure out how we support Dockerfile-based stuff in *-contrib and knative-sandbox, even if it isn't our happy path for most folks.
Another option here would be to consider the use of Bazel, which does not require Docker and may have the flexibility needed to even cross-compile things.
Prow can do Docker-based builds using DinD (Docker-in-Docker), so technically it's doable if the build/test is Docker-based.
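For illustration, a DinD-enabled Prow presubmit generally looks like the sketch below: a privileged container with Docker-in-Docker enabled via the job environment. The job name, image, and script path here are hypothetical, not taken from Knative's actual test-infra config.

```yaml
presubmits:
  knative-sandbox/eventing-kafka:
    - name: pull-eventing-kafka-docker-build   # hypothetical job name
      decorate: true
      spec:
        containers:
          - image: gcr.io/k8s-testimages/kubekins-e2e:latest-master  # illustrative DinD-capable image
            command: ["runner.sh", "./test/build.sh"]                # illustrative entrypoint/script
            env:
              - name: DOCKER_IN_DOCKER_ENABLED
                value: "true"
            securityContext:
              privileged: true  # required for DinD
```

The key pieces are `privileged: true` and the DinD-aware runner image; everything else follows the usual Prow presubmit shape.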
Did we ever consider moving all the Kafka code to another repo, say knative/eventing-kafka? In this new repo we could easily have our own build process without being tied to the one in eventing-contrib, and could easily work on merging the existing channel with the new one.
For a new eventing-contrib-kafka repo, what would the dependency be? Would it depend only on eventing, or also on eventing-contrib?
In the CloudEvents golang SDK we have no plans to support integrations with librdkafka. So if this is the path the Kafka components go, they will need to also rewrite the SDK protocol bindings. The build complications are not worth the effort, in the opinion of the SDK authors.
For a new eventing-contrib-kafka repo, what would the dependency be? Would it depend only on eventing, or also on eventing-contrib?
We currently have a dependency on eventing alone, but we are considering converting to a dependency on eventing-contrib (and then a transitive dependency on eventing) so that we share the same KafkaChannel API, supporting an easy developer experience when using one implementation or the other. That should hopefully make convergence an easier task as well. We are still looking at it to see whether it makes sense and is reasonable to implement.
so that we share the same KafkaChannel API
Agreed; the less we break, the better.
In the CloudEvents golang SDK we have no plans to support integrations with librdkafka
Understandable, would the correct path be to write a /protocol/kafka_confluent in another repo, and then figure out how to build a version which includes it? Is there some easy way to add external 'protocols'? I haven't looked too deeply into it yet ....
Edit: never mind, it looks like that should be easy for us to do.
@davyodom I think we can host the protocol here; it's not that big. We can put it in kafka/common.
Is there some easy way to add external 'protocols'?
Yeah, don't worry; for what we need in channels it's 1 data structure and 1 function, and of course the tests for these :smile: . I can take this action item when we land the Kafka channel code in knative.
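To make "1 data structure and 1 function" concrete, here is a minimal, self-contained Go sketch of what a `kafka_confluent` protocol binding's sender could look like. All names are illustrative assumptions, not the CloudEvents SDK's actual API: `produceFunc` stands in for the librdkafka Produce call (reached through the confluent-kafka-go cgo bindings in a real implementation), and a real binding would translate a CloudEvents message into a Kafka record rather than taking raw bytes.

```go
package main

import (
	"context"
	"fmt"
)

// produceFunc stands in for librdkafka's Produce call (wrapped by the
// confluent-kafka-go cgo bindings in a real implementation).
type produceFunc func(topic string, key, value []byte) error

// Sender is the "1 data structure": it holds the target topic and the
// producer it delegates to.
type Sender struct {
	Topic   string
	Produce produceFunc
}

// Send is the "1 function": it hands a serialized event to the producer.
// A real binding would translate a CloudEvents message into a Kafka
// record (headers, key, payload) instead of taking raw bytes.
func (s *Sender) Send(ctx context.Context, key, payload []byte) error {
	select {
	case <-ctx.Done():
		return ctx.Err()
	default:
		return s.Produce(s.Topic, key, payload)
	}
}

// demo wires the Sender to an in-memory producer and returns what was
// "produced", so the shape can be exercised without a broker.
func demo() string {
	var got string
	s := &Sender{
		Topic: "knative-messaging-kafka",
		Produce: func(topic string, key, value []byte) error {
			got = fmt.Sprintf("%s: %s", topic, value)
			return nil
		},
	}
	if err := s.Send(context.Background(), nil, []byte("hello")); err != nil {
		return err.Error()
	}
	return got
}

func main() {
	fmt.Println(demo()) // prints "knative-messaging-kafka: hello"
}
```

Keeping the producer behind a small function type like this also makes the binding testable without cgo or a running broker, which matters given the build constraints discussed above.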
We have a plan to put this channel into a sandbox repo and enable Docker-based builds there. I'll close the issue after we have something set up and working over there, and will add some more detailed information here.
This issue is stale because it has been open for 90 days with no activity. It will automatically close after 30 more days of inactivity. Reopen the issue with /reopen. Mark the issue as fresh by adding the comment /remove-lifecycle stale.
We got rid of cgo even in the sandbox org, so /close
@pierDipi: Closing this issue.
Problem: SAP wants to donate a Kafka channel implementation, https://github.com/kyma-incubator/knative-kafka, and there is general agreement in the WG that this is desirable. This channel uses librdkafka from Confluent, which requires cgo builds. Knative build and test infrastructure currently has a dependency on all components being built with ko. ko doesn't support cgo builds in general, but it would work in the specific case where the ko build is executed on the same platform that the base image uses.
Persona: Developer contributing cgo-based builds to eventing-contrib
Exit Criteria: Build/test infrastructure supports cgo projects
Additional context (optional): One option would be to run ko builds inside Docker containers, but this was considered undesirable by the ko community, as it would introduce a dependency on Docker which is not currently present.
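For context on why Docker sidesteps the cgo limitation: a Dockerfile-based build can install librdkafka's headers at build time and its shared library in the runtime image, which ko's source-to-image flow does not do. The sketch below is a hypothetical multi-stage Dockerfile; the package names, paths, and binary name are illustrative, not taken from the donated repo.

```dockerfile
# Build stage: cgo enabled, with librdkafka headers available to link against.
FROM golang:1.16 AS build
RUN apt-get update && apt-get install -y librdkafka-dev
WORKDIR /src
COPY . .
# CGO_ENABLED=1 is the part ko cannot express; cmd/dispatcher is illustrative.
RUN CGO_ENABLED=1 go build -o /out/kafka-channel-dispatcher ./cmd/dispatcher

# Runtime stage: only the shared library is needed, not the headers.
FROM debian:buster-slim
RUN apt-get update && apt-get install -y librdkafka1 && rm -rf /var/lib/apt/lists/*
COPY --from=build /out/kafka-channel-dispatcher /usr/local/bin/kafka-channel-dispatcher
ENTRYPOINT ["/usr/local/bin/kafka-channel-dispatcher"]
```

Because the compile and the base image share a platform inside the builder stage, this is effectively the "same platform as the base image" case described above, made explicit and reproducible.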