ajhunyady opened this issue 2 years ago
Environment variables might be useful for overriding things. I was thinking we could add a `$KUBECTL` override to indicate which k8s-provided kubectl to use when it is not the kubectl on `$PATH` (https://github.com/infinyon/fluvio/issues/2143).

We could perhaps first automatically detect which environment is in use and then override it if needed via something like `$K8S`. This would allow all sorts of testing, and in CI it could trigger switching between different k8s environments to test compatibility across them just by setting that env var; managing a file instead would add logistics and potential friction for the user.
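As a rough sketch of how such overrides could behave (the variable names `$KUBECTL` and `$K8S` are the ones proposed above; the fallback and auto-detection logic is an assumption, not current fluvio behavior):

```bash
# Hypothetical override handling; a sketch only, not current fluvio behavior.
# Use the kubectl pointed to by $KUBECTL, falling back to whatever is on $PATH.
KUBECTL_BIN="${KUBECTL:-kubectl}"

# $K8S forces a specific k8s environment; otherwise try to auto-detect it.
if [ -n "${K8S:-}" ]; then
  echo "using k8s environment from \$K8S: ${K8S}"
else
  echo "auto-detected k8s context: $("${KUBECTL_BIN}" config current-context)"
fi
```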
The engineering team has discussed this a couple of times. I think there should be a long-term goal (subject to change) for both connector development and connector use.

The two main flows are:
1. Creating a new connector, running the connector you're working on, and testing it.
2. Using a connector, be it official or unofficial.
Assumptions:

Flow:
- `fluvio connector dev new my-connector [--language python|rust|node]` uses `cargo-generate` to generate something like:
  - `Cargo.toml` with dependencies on `fluvio` and `fluvio-connectors-common`
  - `metadata.yaml`
  - `src/main.rs`
- `cd ./my-connector`
- `fluvio connector dev [--start] [--config ./file.yaml]`
  - uses `metadata.yaml`
  - runs `cargo build` for the docker container, then bundles and builds the fluvio connector image
  - generates `test-connector-config.yaml` using the tests from `metadata.yaml`
  - creates a new connector using `test-connector-config.yaml`
- `fluvio consume <topic>`
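Taken together, a development session based on the flow above might look like the following sketch (the command names come from the list above; the chosen language, flags, and topic name are illustrative assumptions, not a confirmed CLI surface):

```bash
# Hypothetical walkthrough of the proposed dev flow.
fluvio connector dev new my-connector --language rust   # scaffolds Cargo.toml, metadata.yaml, src/main.rs
cd ./my-connector
fluvio connector dev --start                            # builds the image and starts the connector locally
fluvio consume my-topic -B                              # inspect the records the connector produced
```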
The `metadata.yaml` would be something like:

```yaml
version: 0.1.0
name: "ajs-github-stars"
description: "foobar"
license: "Apache"
image: "docker.io/aj/ajs-github-stars"
schema:
  param: <Something that helps validate>
```
When using this connector in dev mode via something like `fluvio connector dev --start`, the dev tooling should produce a connector yaml that looks something like:

```yaml
version: dev
name: ajs-github-stars-dev
uses: ./
topic: ajs-stars-dev
parameters:
  github-repo: aj/foobar
secrets:
  github-key: foobar
```
Assumptions:
Flow:
- `fluvio connector create --config ./using-my-third-party-connector.yaml`

The `using-my-third-party-connector.yaml` should be something like:

```yaml
version: 0.1.0
name: ajs-github-stars
uses: file://./aj/ajs-github-stars.yaml
topic: ajs-stars
parameters:
  github-repo: aj/foobar
secrets:
  github-key: foobar
```
The `uses` parameter is the hardest part of this. We've discussed in depth about disliking `type`.

In the developer experience, this would point to a local directory. In the user case it would point to things like:
- `github.com/<user>/<repo>`
- `https://<website dot com>/`
- `file://./location`
- `infinyon://<official-infinyon-connector>`

where each of these would have a `metadata.yaml` as described above.
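For example, the same user-facing config could reference an official connector source instead of a local file (the `infinyon://` form is taken from the list above; how it would be resolved is an assumption):

```yaml
# Hypothetical: same connector config as above, but `uses` points at an official source
version: 0.1.0
name: ajs-github-stars
uses: infinyon://<official-infinyon-connector>
topic: ajs-stars
parameters:
  github-repo: aj/foobar
secrets:
  github-key: foobar
```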
Anyway, I think we should turn this into a relatively long-running goal with intermediate steps along the way:
- a `uses` parameter that can resolve `metadata.yaml` from a few different sources
- using `cargo-generate` to create a new connector is a standalone task (but relatively low priority)
- `fluvio connector dev test` and `fluvio connector dev start --example <foo>` would be another standalone task
I was going through step-by-step in our newsletter https://nightly.fluvio.io/news/this-week-in-fluvio-0020/ to build a connector with K3D.
When I got to the last step, it was difficult to figure out how to convince the connector to load the image from the local registry.
The image is `infinyon/fluvio-connect-cat-facts`, but in the connector file it is `cat-facts`:

We should loosen the restriction on the `image name` for the local `development` environment. Users should have the flexibility to name their `image` whatever they want when they push to the `local registry`.

Furthermore, the development environment needs to support multiple local Kubernetes distributions such as `k3d` and `minikube`, which seem to have different conventions for handling local images.

For example, in `k3d` I have multiple clusters:

My image name in `k3d` is:

I think we should create a `development` section that allows users to configure the `registry` and all relevant artifacts required to identify the `image`:

For minikube:

Docker:
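A minimal sketch of what such a `development` section could look like, assuming a per-environment registry override (every key and value below is hypothetical and not an existing fluvio connector schema):

```yaml
# Hypothetical `development` section; field names and values are illustrative only.
development:
  registry: k3d-registry.localhost:5000   # local registry the image was pushed to (assumed address)
  image: cat-facts                        # free-form image name chosen by the developer
  image-pull-policy: Never                # assume the image is already present in the local cluster
```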