Closed jkutner closed 4 years ago
To kick things off, I’m gently leaning towards alternative #2. I think that there’s a strong overarching theme of Cloud Native Buildpacks building an image that can run on pretty much any platform; this is why creating OCI images was such an important technical detail. However, I’ve never seen building (identical) source code on any platform as a goal. From the Cloud Foundry side, we’re looking at a completely decoupled build component (as in decoupled from our own platforms even) that creates images that can be run in any of our PaaS, FaaS, or KaaS abstractions and so I view the inputs as tied to the builder not the platform that the image runs on.
Beyond the good engineering of decoupling at this level, one of the larger requests we’ve received from customers is for promotion of built artifacts. Many customers have multi-tiered deployment architectures (dev, QA, prod, etc.), and today they are required to promote the source code (or, in the Java case, compiled JARs) between those environments. The downside to this design is that the application is restaged in every environment, making it susceptible to environment-specific variations of the buildpacks. What they’ve asked for is a way to promote a single built droplet from environment to environment, ensuring that staging happens only once and the same artifact progresses through each phase. CNB encapsulates this directly, not by giving CF customers a way to promote droplets, but rather by building a portable artifact (the OCI image) that is their primary artifact for testing.
To drag this back to the issue at hand, the strong requirement I see is portability of the created images: between deployment environments (dev, QA, prod), between abstractions (PaaS, FaaS, KaaS), and between vendors (PCF, GCE, Heroku). But I haven’t seen evidence that there’s a strong desire for that same portability for source code.
All that being said, I’ve got a laser focus on Enterprise on-prem use-cases and would love to hear the broader view.
I'm torn here.
I'd love for there to be a single, common manifest file across all PaaS. Many of our users move from Heroku to Dokku, decide there is a crucial limitation in Dokku or that they miss a feature, and move back to Heroku (or some other PaaS). It would be great to have a common, buildpack-related artifact such that users don't have to waste time getting started on a given platform.
As an alternative, I would hope the platform-specific file-handling code could be shared, in a similar way to @nebhale's `procfile-buildpack`, but it makes sense that this would remain the "secret sauce" for a given platform.
I also am hesitant to add yet another format to the mix. For figuring out what buildpacks to run, Dokku tries to emulate Heroku, which supports the following:

- the `buildpacks:set` command
- `app.json`
- `.buildpacks`
- `heroku.yml`
- the `BUILDPACK_URL` env var
- `bin/detect`
It's not clear to me what order these are applied in, and the various ways in which they interact are confusing to users of the platform. Adding yet another file into the mix makes it even more confusing, potentially without any real benefit if other PaaSes don't support it. The same problem exists for `Procfile` and other such manifests.
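For context, the `.buildpacks` format (from the old multi-buildpack approach) is, as I understand it, just a newline-separated list of buildpack URLs:

```
https://github.com/heroku/heroku-buildpack-nodejs
https://github.com/heroku/heroku-buildpack-ruby
```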
I like the idea of having a defined "hook" to manipulate buildpacks. I think manipulating the processes in a `launch.toml` to match a `Procfile` or `heroku.yml` can be done separately, but a clear way of manipulating the list of buildpacks to run would be ideal. I'm not sure what form that should take, though.
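For what it's worth, the `Procfile`-to-`launch.toml` mapping that could be "done separately" is fairly mechanical. A minimal sketch (the helper name is hypothetical; it assumes `launch.toml` process tables with `type` and `command` keys):

```python
import re

def procfile_to_launch_toml(procfile_text: str) -> str:
    """Translate Procfile lines like 'web: bundle exec puma' into a
    launch.toml-style [[processes]] list. Sketch only: it skips blank
    or malformed lines and does no quoting/escaping of commands."""
    out = []
    for line in procfile_text.splitlines():
        match = re.match(r"^(\w+):\s*(.+)$", line.strip())
        if not match:
            continue
        name, command = match.groups()
        out.append("[[processes]]")
        out.append(f'type = "{name}"')
        out.append(f'command = "{command}"')
        out.append("")
    return "\n".join(out)
```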
> It would be great to have a common, buildpack-related artifact
I think that, even in the best of outcomes, we can't solve the problem as extensively as you'd desire. I think it's a complete non-starter to define a standard file and then explicitly limit it to that standard. All reasonable scenarios result in this file being the required kernel, but platforms would use it, in lieu of the 8 other files we all have (😜), as the one true place to add additional information. In other words, the file would be open for extension.
If this is the case, I think the problem of hopping platforms, finding that your feature isn't there, and hopping back continues to exist. And this specific point, that I explicitly do not think source code can hop platforms without modification, is behind my (loosely-held) desire to not standardize it at all.
Proposal: introduce a standard config file that all platforms should support (`cnb.toml`?), but allow platforms to accept the same configuration from their own existing configuration files (`manifest.yml`, `app.json`, `Procfile`, etc.). That way an app can be maximally portable, but it doesn’t need to be.
discussion moved to buildpacks/rfcs#25 and buildpacks/rfcs#32
Buildpack consumers often need to customize the commands used to run their images, run the same image with multiple different commands, or define the buildpacks they want to run on their app. It's also common to want to keep this information under source control with the application code (which helps when forking an app).
To solve this, we may need to introduce an application descriptor file to the v3 spec. In v2, this exists in the form of the `manifest.yml`, `Procfile`, `.buildpacks`, and `app.json` files.

The possible elements included in a v3 application descriptor file might be:
- processes/commands (similar to `launch.toml`)
- buildpacks
- environment variables (e.g. `MAVEN_OPTS`, which is honored by `mvn`)

The application descriptor might be named something like:
- `app.toml`
- `cnb.toml`
- `manifest.toml`
- `launch.toml` (we would need to reconcile this with the existing `launch.toml`)
- `cnb.xml`
Alternatives

- Options that the `pack` CLI uses/parses and passes to the lifecycle as values.
- A `heroku.yml` file that contains a list of buildpacks or env vars that are interpreted by Heroku. A `heroku.yml` would not work with `pack` or another platform, though.
- The `--entrypoint` option (e.g. on `docker run`).
- A `Dockerfile` that has only `FROM` and `ENTRYPOINT` lines.
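As a sketch of that last alternative (the image name and process name are made up), the entire file would be:

```dockerfile
# Hypothetical: base on the image CNB built, override only the start command
FROM my-built-app:latest
ENTRYPOINT ["worker"]
```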