imjasonh opened 6 years ago
/cc @jchesterpivotal @sixolet @julz
I didn't bring up that you can embed source code into Concourse tasks with a little bit of work. It's just evil. The usual way to do the edit-upload-run cycle in Concourse is to use `fly execute`.
For large local source, I suggest creating a temporary nginx (or whatever) pod on the cluster and just using `kubectl cp` to upload the source, as I'm doing in CBI: https://github.com/containerbuilding/cbi#https-context
(PoC for Skaffold integration using nginx: https://github.com/GoogleContainerTools/skaffold/issues/596)
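A rough shell sketch of that flow (the pod name, Service, and paths are illustrative, and the final URL assumes the default namespace):

```sh
# Run a throwaway nginx pod and make it reachable inside the cluster.
kubectl run source-nginx --image=nginx --restart=Never
kubectl wait --for=condition=Ready pod/source-nginx
kubectl expose pod source-nginx --port=80

# Bundle the local source and copy it into nginx's docroot.
tar czf /tmp/context.tar.gz -C ./my-app .
kubectl cp /tmp/context.tar.gz source-nginx:/usr/share/nginx/html/context.tar.gz

# On-cluster builds can now fetch the context over HTTP, e.g.
#   http://source-nginx.default.svc.cluster.local/context.tar.gz
```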
Also, for the Docker and BuildKit builders, the BuildKit session API could be used for incrementally uploading the source without any extra storage.
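For reference, this is the same path `buildctl` takes when given a local directory: the client streams files over the session (gRPC) connection, so nothing has to be staged in a registry or object store first. A minimal invocation (assuming a reachable buildkitd; the image name is a placeholder):

```sh
# Local context and Dockerfile are sent over the BuildKit session API.
buildctl build \
  --frontend=dockerfile.v0 \
  --local context=. \
  --local dockerfile=. \
  --output type=image,name=registry.example.com/my-app:dev,push=true
```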
Another option would be to integrate with the newly proposed `knctl` to upload source (where + how TBD), then create a build that points to it, and even a configuration that points to that build, for full source-to-service smoothness.
We could call it `knctl push`! (And I really like the idea.)
Spitballing a bit, I wonder if we might want to have a `Source` CRD. This would be a bit similar to the `custom` source type, but would allow folks to define custom sources, which could include things like blobstores, or even a local volume. That way, you could do something like:
```yaml
apiVersion: build.knative.dev/v1alpha1
kind: Build
metadata:
  name: example-build
spec:
  source:
    host-volume:
      path: /my/local/source/code
  steps:
    ...
```
which would let local development happen without an upload. Or
```yaml
apiVersion: build.knative.dev/v1alpha1
kind: Build
metadata:
  name: example-build
spec:
  source:
    blobstore:
      digest: abcef
  steps:
    ...
```
That'd give lots of flexibility for folks to experiment with nice ways to upload and manage source.
With this, we could easily ship a simple blobstore, a blobstore `Source`, and an upload command out of the box for easy experimentation. I'm a bit agnostic as to whether we'd want to do that as just a sample in the docs, with nginx as the store and rsync as the upload command (as @AkihiroSuda suggests), or whether we might want to develop a first-class component for source upload/sync with bells and whistles (à la the CF resource_match stuff) as a "batteries included" thing. We could quite plausibly start with the former (just a nice sample in the docs) and then think about the latter later?
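For the sample-in-the-docs flavour, the upload command can be as small as rsync tunnelled over `kubectl exec`. A rough sketch (the script name, pod name, and paths are illustrative; it assumes rsync is installed both locally and in the store pod's image):

```sh
#!/bin/sh
# krsync.sh -- rsync into a pod, using `kubectl exec` as the transport.
if [ -z "${KRSYNC_STARTED}" ]; then
  export KRSYNC_STARTED=true
  exec rsync --blocking-io --rsh "$0" "$@"
fi
# rsync re-invokes this script as its "remote shell": $1 is the host part of
# the destination (here, the pod name); the rest is the remote rsync command.
pod="$1"; shift
exec kubectl exec -i "$pod" -- "$@"
```

Usage would look like `./krsync.sh -av --delete ./src/ source-nginx:/usr/share/nginx/html/src/`, which only transfers changed files on each iteration.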
FWIW I quite like the idea of a `knctl push` (it's hard to have spent time in cf-land and not like the idea of `knctl push`), but it seems a little difficult to do it in an unopinionated way, because you need some sort of blobstore there, and push would differ a lot depending on what that looked like. I'm instinctively a little dubious about having knctl end up with too much of an opinionated workflow that assumes a particular store -- it seems to cut against the idea of knative being a set of primitives if we ship too many opinions in the 'core' organisation, and also seems a bit too likely to cut off innovation, particularly here where there are a fair few good options.
> Spitballing a bit, I wonder if we might want to have a `Source` CRD.
In my head I've been putting off the idea of implementing a `Resource` CRD (to go with `Pipeline`). The logic for running a Concourse resource is pretty simple; providing the logic to inject stuff into `/opt/resource/in` doesn't strike me as a long way off the beaten path.
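For context, a resource's `in` script receives the destination directory as its first argument and a JSON payload (source/version/params) on stdin, and prints the fetched version as JSON on stdout. A trivial sketch (the actual fetch command is illustrative):

```sh
#!/bin/sh
# /opt/resource/in -- minimal sketch of the Concourse resource "in" contract.
set -e
dest="$1"          # directory Concourse wants populated with the source
payload="$(cat)"   # JSON: {"source": {...}, "version": {...}, "params": {...}}

url="$(echo "$payload" | jq -r '.source.url')"
ref="$(echo "$payload" | jq -r '.version.ref')"

# Illustrative fetch: pull a tarball of the requested version into $dest.
curl -sSL "$url/$ref.tar.gz" | tar xz -C "$dest"

# Report the version that was fetched.
printf '{"version": {"ref": "%s"}}\n' "$ref"
```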
The Concourse team have rebasing on Kubernetes as their next major epic on the Runtime track, so we might get this for free.
> I'm instinctively a little dubious about having knctl end up with too much of an opinionated workflow that assumes a particular store -- it seems to cut against the idea of knative being a set of primitives if we ship too many opinions in the 'core' organisation, and also seems a bit too likely to cut off innovation, particularly here where there are a fair few good options.
I hear you saying "`cf` plugin" in a funny accent :)
On second thoughts, I've felt that the proper boundary to Serving is container images, however it is they come to be generated (including on-cluster with Build). The riff team have in mind to fold local buildpacks into their CLI; depending on how people feel, I can't see why that wouldn't be a useful thing to adopt into `knctl`. Buildpacks are opinionated, but it's a well-proven and widely-understood opinion at this point (and hopefully will be adopted into the CNCF soon). What would be new vs Cloud Foundry would be shipping a container image instead of source.
A basic implementation of build-from-source is in knctl v0.0.4 (https://github.com/cppforlife/knctl/releases/tag/v0.0.4). It looks something like this:
```
$ knctl deploy \
    --service simple-app \
    --directory=$PWD \
    --service-account serv-acct1 \
    --image index.docker.io/<your-username>/<your-repo> \
    --env SIMPLE_MSG=123
```
The basic implementation (nothing fancy) didn't require any changes to Knative. It copies the source over into the source container of the build (using CustomSource instead of GitSource) [1] and then signals the build to continue on with building [2].

[1] https://github.com/cppforlife/knctl/blob/062a4e46af9fce7d6cb9a378bb930598d50e9ee1/pkg/knctl/build/build_spec.go#L73-L84
[2] https://github.com/cppforlife/knctl/blob/062a4e46af9fce7d6cb9a378bb930598d50e9ee1/pkg/knctl/build/cluster_builder_source.go#L90
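As a rough mental model of that flow (this is not knctl's actual code; the container name, paths, and marker file below are illustrative):

```sh
# 1. The build's custom source step simply waits for the upload to finish,
#    e.g. `while [ ! -f /workspace/.upload-done ]; do sleep 1; done`.

# 2. The CLI streams the local directory into that step's container...
tar cf - -C "$PWD" . | kubectl exec -i "$BUILD_POD" -c source-step -- tar xf - -C /workspace

# 3. ...then drops the marker so the build carries on with the remaining steps.
kubectl exec "$BUILD_POD" -c source-step -- touch /workspace/.upload-done
```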
@cppforlife your approach with knctl looks pretty clever. Have you thought about how you might use a persistent volume and a tool like rsync (or unison or casync or dsync) to speed up the transfer?
I'd really like to use Knative Build together with `skaffold dev` so I can build very quickly using a Kubernetes cluster nearby on my LAN.
Users iterating on an app often want to deploy from source on their workstation, without having to manually commit each incremental change to source control or upload to object storage and modify their config YAML to build it.
Some options discussed in the latest Build WG meeting:
- CLI tooling that takes advantage of `Build`'s existing source options. For purposes of illustration, such a command would bundle up the contents of `src/` and upload them to some pre-configured location (object storage or source control), modify `build.yaml` to specify `source` pointing to that location, and `kubectl apply` it. This is just for illustration and is not intended to be a real proposal for such a command, or a `knative` CLI in general. `Build` today could make this even faster.
- Integration with Skaffold. This effectively encompasses the CLI tooling described above, but with all the existing featureset and infrastructure that Skaffold has today, including continuous dev deployment.
- Inline source in the `Build` config. Here, `inline` would be a `map[string]string` containing source contents that should be placed into the `/workspace` and built according to the `steps`. This is just for illustration and is not a real proposal for such an API.

  The benefit here is that users with very small source don't need to upload that source to object storage or source control in order to build it; the source is stored inside the `Build` config in etcd.

  However, etcd imposes a size limit on objects stored within it, and repeated deployment of source in this fashion might cause problems. Additionally, source stored in this fashion is not versioned (unless the YAML config itself is versioned), so users may find that the last working version they deployed from inline source an hour ago is no longer available, or is hard to recover. Describing complex source trees or many files within YAML is also likely to become a pain.

  But allowing users to describe very simple programs inline in their YAML configs could be useful for small apps, and should at least be considered.