tektoncd / pipeline

A cloud-native Pipeline resource.
https://tekton.dev
Apache License 2.0

Binary (local) input type #924

Open markusthoemmes opened 5 years ago

markusthoemmes commented 5 years ago

Openshift builds have a really neat concept of Binary (local) sources. Essentially, it gives you an API to stream binary contents into a build from anywhere (including the local file system).

This opens the door for nice FaaS-style user experiences, where a user might run something like cli create my-large-packaged-nodejs-function.zip and expect it to be built into a container image, with that image then pushed and run somewhere (for example Knative Serving).

This flow would be possible without requiring the user to push this code somewhere first and without the user having to install software to build and manage images.

Would that be something that Tekton could support?
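For context, OpenShift's binary builds expose this kind of UX today. A hedged sketch of the flow being described, using the real oc CLI (the build name my-fn is hypothetical, and this requires an OpenShift cluster):

```shell
# Sketch of the OpenShift "binary (local) source" UX this issue points to.
# Requires the oc CLI and a logged-in OpenShift cluster; names are examples.

# Create a binary build config: no git source, input is streamed in by the client.
oc new-build --binary --strategy=docker --name=my-fn

# Stream the current local directory into the build and follow its logs.
oc start-build my-fn --from-dir=. --follow

# A single archive can be streamed instead of a directory:
# oc start-build my-fn --from-archive=my-large-packaged-nodejs-function.zip
```

The key property is that the client streams the payload over the API; nothing has to be pushed to a git repository or object store first.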

bobcatfish commented 5 years ago

This flow would be possible without requiring the user to push this code somewhere first and without the user having to install software to build and manage images.

@markusthoemmes could you give a bit more detail about what exactly this would look like from a user perspective? I'm just trying to wrap my head around where the data would actually be coming from.

Since Tekton Tasks currently execute on a k8s node, would data from "the local filesystem" be data from the disk on the node, or would it be data from the user's machine, which would be uploaded to the node? If it's the latter case, then it sounds to me like the key to addressing this might be to tackle https://github.com/tektoncd/pipeline/issues/235 (but maybe I'm misunderstanding!)

vdemeester commented 5 years ago

@bobcatfish I think the initial use case @markusthoemmes is aiming for is some FaaS-like behavior where you could, for example, send your code directly to a Pipeline that, once it succeeds, deploys to Knative, without having to rely on any source-code management system such as a git repository.

This is a bit related to https://github.com/tektoncd/cli/issues/53.

This is useful when a user would like to upload a directory without requiring access to a new data plane (e.g., GCP's Cloud Storage).

bobcatfish commented 5 years ago

Hm okay, I think this is making a bit more sense @vdemeester ! So would this be a use case where a user is manually creating PipelineRuns and/or TaskRuns? (cuz if there was some other mechanism involved, e.g. the CLI, then that mechanism could take care of uploading the source wherever it needs to go)

Or is this more about making it possible to actually store that data somewhere - e.g. ephemerally in the k8s cluster - without requiring some external storage like a git repo or a storage bucket?

vdemeester commented 5 years ago

So would this be a use case where a user is manually creating PipelineRuns and/or TaskRuns? (cuz if there was some other mechanism involved, e.g. the CLI, then that mechanism could take care of uploading the source wherever it needs to go)

Or is this more about making it possible to actually store that data somewhere - e.g. ephemerally in the k8s cluster - without requiring some external storage like a git repo or a storage bucket?

So, the cli (or a cli) would indeed take care of uploading the source to wherever it needs to go. But yeah, this would be about storing the data ephemerally in the cluster. It could be a volume, or whatever, and it would/could be referenced as an input resource (of type storage, git or whatever :angel:) so that you could use it as an input to a Task or Pipeline :angel:.

A side effect would be the ability to easily run Tekton pipelines locally without having to share anything with the k8s cluster (aka HostVolume, …), but this is definitely not the primary use case targeted here.
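At the time, Tekton modeled Task inputs as PipelineResources (types such as git, image, and storage). A purely hypothetical sketch of what an ephemeral, cluster-stored input could look like in that model; the volume type and all names here are invented for illustration and were not a real Tekton API:

```yaml
# Hypothetical sketch only: a "volume"-style PipelineResource type did not
# exist in Tekton; the type name and params below are invented.
apiVersion: tekton.dev/v1alpha1
kind: PipelineResource
metadata:
  name: uploaded-source
spec:
  type: volume                      # invented type for this sketch
  params:
    - name: claimName
      value: cli-upload-scratch     # PVC a CLI would have streamed the source into
```

A Task would then reference uploaded-source as an input, the same way it referenced git or storage resources.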

abayer commented 5 years ago

So I was about to create an issue for adding a volume resource type. We've got a use case with one pipeline that clones a git repo, makes some changes or adds new files, generates a new pipeline to actually build/test the repo, and then runs that pipeline. Basically, what we want is to be able to reuse the workspace volume from the first PipelineRun in the second PipelineRun. So I'm wondering whether I should just pursue the idea of volume resources or try to go more general and address this?
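In current Tekton, this specific need is served by workspaces backed by a PersistentVolumeClaim: two runs that bind the same claim see the same files. A hedged sketch, assuming a pre-created PVC and a Task name that are hypothetical:

```yaml
# Sketch, assuming a PVC "pipeline-scratch" already exists and a Task
# "generate-pipeline" writes into its "output" workspace. A later run can
# bind the same claimName and reuse whatever this run produced.
apiVersion: tekton.dev/v1beta1
kind: TaskRun
metadata:
  name: first-run
spec:
  taskRef:
    name: generate-pipeline         # hypothetical Task
  workspaces:
    - name: output
      persistentVolumeClaim:
        claimName: pipeline-scratch # shared, pre-created PVC
```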

abayer commented 5 years ago

After thinking about this for a while and starting an experimental prototype of a volume resource type, I think it makes sense to tackle that on its own. Opening an issue for that.

cmoulliard commented 5 years ago

You're right, @abayer. We need to mount a volume into the pod Tekton creates so that a tool such as oc, odo, kubectl, ... can push the code of the project to be built as a binary. This is what we did in the odo project and in our halkyon operator, using the concept of a supervisord init container.
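The upload step such tools perform boils down to streaming a tar archive into the pod's workspace. A minimal local sketch of that mechanism (paths are examples; in a real cluster the receiving side of the pipe would run inside the pod, via something like `kubectl exec -i <pod> -- tar -xf - -C /workspace`):

```shell
# Simulate streaming local source into a build workspace via tar, the same
# mechanism oc/odo-style binary uploads rely on. Paths are examples.
mkdir -p /tmp/demo-src /tmp/demo-workspace
echo 'console.log("hello");' > /tmp/demo-src/index.js

# Sender packs the source tree to stdout; receiver unpacks into the workspace.
# In a cluster, the receiver runs inside the pod over an exec stream.
tar -C /tmp/demo-src -cf - . | tar -C /tmp/demo-workspace -xf -

ls /tmp/demo-workspace   # index.js is now present in the workspace
```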

tekton-robot commented 4 years ago

Stale issues rot after 30d of inactivity. Mark the issue as fresh with /remove-lifecycle rotten. Rotten issues close after an additional 30d of inactivity. If this issue is safe to close now please do so with /close.

/lifecycle rotten

Send feedback to tektoncd/plumbing.

zhangtbj commented 4 years ago

Hi @cmoulliard or @markusthoemmes ,

Do you have any update on this issue?

We also have this kind of requirement that we want to upload some local folder to a Tekton task to build the related container image.

Is there a new InputResource for that or any new solution for that?

You're right, @abayer. We need to mount a volume into the pod Tekton creates so that a tool such as oc, odo, kubectl, ... can push the code of the project to be built as a binary. This is what we did in the odo project and in our halkyon operator, using the concept of a supervisord init container.

Or could you share how the odo project handles this ^^

Thanks!

markusthoemmes commented 3 years ago

/remove-lifecycle rotten

vdemeester commented 3 years ago

/lifecycle frozen