gardener / landscaper

Development of Landscaper - A deployer for K8S workloads with integrated data flow engine.
Apache License 2.0

Homebrew like user experience #60

Closed: kramerul closed this issue 1 year ago

kramerul commented 3 years ago

How to categorize this issue?

/area usability /kind enhancement /priority normal

What would you like to be added:

I would like to have an experience like homebrew to install artefacts into a kubernetes cluster

Therefore, blueprints should:

One approach might be to change the architecture in the following way:

[architecture diagram: landscaper]

Why is this needed:

Ease the usage of landscaper.

schrodit commented 3 years ago

Hi kramerul,

thanks for the issue.

If I get your request right, you want to have a cli that can install blueprints without the need for an installation/landscaper cluster?

If my assumption is right, then that cli will replace the landscaper controller (scheduling + data management) and the deployers. This is not possible because:

We are currently working on a cli to ease the creation of blueprints and also to lower the barrier to get started with the landscaper. @In-Ko and team will work on this topic. Maybe you can share some future plans.

kramerul commented 3 years ago

If I get your request right, you want to have a cli that can install blueprints without the need for an installation/landscaper cluster?

Yes

If my assumption is right, then that cli will replace the landscaper controller (scheduling + data management) and the deployers.

No. Landscaper controller and cli will live in parallel as shown in the architecture diagram.

provider-specific code is outsourced to specific deployers, e.g. the helm deployer to install helm, the terraform deployer to install terraform. The cli would have to implement all the deployer-specific code, which contradicts the landscaper's extensibility.

Extensibility could also happen at the language (go) level. I would expect that all these deployers should also exist as go code. Even using the helm command directly as a subprocess during installation would be OK.
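For illustration, a minimal Go sketch of the subprocess idea suggested here, i.e. shelling out to helm template and working with the rendered manifests. The release name and chart path are made up, and this is not how the landscaper's helm deployer is actually implemented:

```go
// Sketch only: render a helm chart by invoking the helm binary as a subprocess.
// Assumes `helm` is on the PATH; release name, chart path and values file are
// hypothetical placeholders.
package main

import (
	"fmt"
	"log"
	"os/exec"
)

func renderChart(release, chartPath, valuesFile string) (string, error) {
	// `helm template` renders the chart locally without installing anything.
	args := []string{"template", release, chartPath}
	if valuesFile != "" {
		args = append(args, "-f", valuesFile)
	}
	out, err := exec.Command("helm", args...).CombinedOutput()
	if err != nil {
		return "", fmt.Errorf("helm template failed: %w: %s", err, out)
	}
	return string(out), nil
}

func main() {
	manifests, err := renderChart("demo", "./charts/my-service", "")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(manifests) // the rendered manifests could then be applied to the target cluster
}
```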

blueprints are just a template that specifies how a service is installed with a given configuration. This configuration has to be provided, either from the operator itself or from other already installed services. The landscaper uses the installation as runtime resource to configure such values which are then also used for scheduling. A cli would be needed to provide the same configuration and would also need to keep track of other available services with exports in the system.

That's correct. But I think it would be worth changing this paradigm. The installation values could also be managed in the target cluster. The exports could also be written to the target cluster (and backed up in the management cluster).

the landscaper and its deployers were designed to be self-healing, reconciling with the possibility to install and manage software also behind firewalls. This is possible via deployers/controllers, but won't be possible with only a cli.

The landscaper-operator could be self-healing in the same way as, for example, a kapp-controller or a helm-operator. The landscaper-operator could run time-based reconciles using the landscaper-go-api (as shown in the architecture diagram).
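For illustration, a minimal sketch of such a time-based reconcile loop, written with controller-runtime. It reconciles a plain ConfigMap as a stand-in resource and requeues on a fixed interval; it is not based on any actual landscaper Go API, and the resource type and interval are placeholders:

```go
// Sketch only: a periodically requeuing ("time-based") reconciler.
package main

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

type timedReconciler struct {
	client.Client
}

func (r *timedReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	var cm corev1.ConfigMap
	if err := r.Get(ctx, req.NamespacedName, &cm); err != nil {
		return ctrl.Result{}, client.IgnoreNotFound(err)
	}
	// ... compare desired vs. actual state and apply changes here ...

	// Unconditionally run again after a fixed interval.
	return ctrl.Result{RequeueAfter: 10 * time.Minute}, nil
}

func main() {
	mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{})
	if err != nil {
		panic(err)
	}
	if err := ctrl.NewControllerManagedBy(mgr).
		For(&corev1.ConfigMap{}).
		Complete(&timedReconciler{Client: mgr.GetClient()}); err != nil {
		panic(err)
	}
	if err := mgr.Start(ctrl.SetupSignalHandler()); err != nil {
		panic(err)
	}
}
```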

If you only have a cli, there would be no benefit over just running helm install or terraform deploy from a machine

There will be many benefits over helm:

kramerul commented 3 years ago

For me the main question is:

Would you use a system which manages software installations on your computer, but requires opening a central corporate service to administrate the software on your computer, or would you prefer using brew install?

Diaphteiros commented 3 years ago

I get your point and I agree with you that it would be nice if the landscaper could be used in that way, but there are some differences between what homebrew does and what the landscaper is intended to do that make your analogy somewhat flawed.

To name two points as an example: software installed by homebrew usually doesn't require any user-provided configuration and is installed on only one system (the one homebrew is on). The landscaper was created to manage multiple Gardener landscapes, each of which needs configuration that can easily span several hundred lines of yaml and affects not only a single cluster but several clusters at once. Especially for the 'distributed' part, I see problems if there isn't a central operating cluster. Do I have to push multiple blueprints to multiple clusters to get the system going? Where are config and state stored?

kramerul commented 3 years ago

To name two points as an example: software installed by homebrew usually doesn't require any user-provided configuration and is installed on only one system (the one homebrew is on).

That's exactly how software installation should work.

The landscaper was created to manage multiple Gardener landscapes, each of which needs configuration that can easily span several hundred lines of yaml and affects not only a single cluster but several clusters at once.

From my point of view, the default case should be that you don't need a single line of configuration to install software. For complex systems this might be different, but it should not be the default case. Also, my proposal could span several clusters.

Especially for the 'distributed' part, I see problems if there isn't a central operating cluster.

I do not argue against a central operating cluster. I argue to split the landscaper into two layers. One layer which eases the usage for the developer (cli) and one layer on top which controls complex installations using a central operating cluster. I'm also convinced that such a split will improve the entire product. Testing would become much easier. Simple cases could be tested using the cli.

Do I have to push multiple blueprints to multiple clusters to get the system going?

Ideally I would have to push only one blueprint that contains all dependencies. For multiple clusters, this might look different. Perhaps this could only be solved by a central operating cluster.

Where are config and state stored?

I would store the config and state in the target cluster (like helm does). You are also using helm to install some blueprints. In that case you also trust helm to save the state correctly.

schrodit commented 3 years ago

Extensibility could also happen at the language (go) level. I would expect that all these deployers should also exist as go code. Even using the helm command directly as a subprocess during installation would be OK.

Currently most of the deployers are maintained by us. But to scale, we imagine that everyone can simply write a controller for a deploy item and extend the landscaper without touching the landscaper core. This would not be possible with a cli tool that includes the logic in its go code.
In addition, we would force people to write go (I know it is the most commonly used language for writing k8s controllers), and not only would they have to write a k8s controller, they would also have to comply with the cli tool's interface.

Even if we did want to have extensibility in the cli tool, we would end up with something very similar to terraform (Terraform handles the scheduling of components and their data flow and allows for extensibility via providers).

I do not argue against a central operating cluster. I argue to split the landscaper into two layers. One layer which eases the usage for the developer (cli) and one layer on top which controls complex installations using a central operating cluster. I'm also convinced that such a split will improve the entire product. Testing would become much easier. Simple cases could be tested using the cli.

I have to disagree here. Splitting the landscaper into an operator and a cli would not make testing easier. Testing the cli alone would be simpler because its functionality is limited, but we would then have two completely different scenarios for how blueprints can be installed. Both cases would need to be tested.

The scenarios are different simply because there cannot be a shared landscaper go API: an operator and a cli tool cannot work the same way. The lifecycle of a cli-based tool that works on a single cluster is not the same as that of an operator that reconciles installations and outsources all the provider-specific code to other controllers (deployers).

From my point of view, the default case should be that you don't need a single line of configuration to install software. For complex systems this might be different, but it should not be the default case. Also, my proposal could span several clusters.

The landscaper model completely supports such a scenario. Even multiple blueprints can be pre-bundled and preconfigured into an aggregated blueprint. Then the end user simply installs that one blueprint (in addition to the landscaper).

I would store the config and state in the target cluster (like helm does). You are also using helm to install some blueprints. In that case you also trust helm to save the state correctly.

We are using helm, but we only use helm template. The lifecycle is completely managed by controllers that work on the templated manifests, simply because we didn't have a good experience with helm install in the past and would rather do the management on our own (see also the gardener-resource-manager).

kramerul commented 3 years ago

The discussion has arrived at a stage where I am not able to refute the arguments technically.

I was hoping that I could improve the usability a bit with this proposal. But it seems that constraints from the existing implementation are more important than usability. But maybe I also have a wrong picture of usability.

I am still convinced that such a thing would be feasible (other products like kapp also have this split between cli and controller) and could significantly improve the usability of the product.

schrodit commented 3 years ago

I was hoping that I could improve the usability a bit with this proposal. But it seems that constraints from the existing implementation are more important than usability. But maybe I also have a wrong picture of usability.

Please don't get me wrong. I think it's important to have good user experience and we also want to simplify the usage of the landscaper.
But we believe that the controller/deployer concept is essential, as we would otherwise lose a lot of functionality and benefits that a cli-only tool cannot provide.

We rather plan to make the usage of the landscaper and blueprints as easy as possible. One planned action is to create a cli command that installs the landscaper and some core deployers. Then, with another command, an installation can be created based on a given blueprint.

This would reduce the necessary actions for a simple blueprint to landscaper-cli init and landscaper-cli create <my-blueprint ref>.

cc @In-Ko @achimweigel @robertgraeff

kramerul commented 3 years ago

One planned action is to create a cli command that installs the landscaper and some core deployers.

That would indeed make things a little bit easier. But I would like to note that this approach found little acceptance with helm 2 (tiller).

I think the main point from the homebrew experience got completely lost in the discussion: dependency management.

As far as I can see, there is currently no way for blueprints to manage dependencies on already installed blueprints.

In our case, we would like to install/uninstall kyma and/or cf-for-k8s independently into one cluster. They share a lot of common sub-packages like istio, a docker registry or fluentbit. Unfortunately some of them can exist only once in the cluster (e.g. istio).


Therefore, I would love to see the following features:

- Specify dependencies between blueprints (e.g. cf-for-k8s depends on istio)
- Specify version ranges for the dependency (e.g. cf-for-k8s requires istio with a version greater than 1.7)
- If I install one blueprint, which depends on another, the missing blueprint is automatically installed
- One blueprint should be able to read installation values from a dependent blueprint (e.g. cf-for-k8s needs to get the credentials for an already installed docker registry, which was installed before, triggered by the kyma installation). It would be even better if cf-for-k8s could trigger the creation of new credentials.
- I'm not sure if there should be some kind of reference counting (istio should remain in the cluster if cf-for-k8s is uninstalled but kyma is still installed)

Diaphteiros commented 3 years ago

Hm. What you want is already possible in parts and difficult in other parts. The landscaper does dependency management via imports and exports. If a component A imports something that is exported by a component B, then A depends on B. The idea behind this was basically what you want too - let's use an ingress controller as an example: there are multiple components that need an ingress controller in the cluster - basically everything that uses ingresses needs exactly one ingress controller to be present and needs to know the ingress class (how to annotate the ingress so that the ingress controller takes care of it). However, these components that depend on an ingress controller usually don't care which ingress controller it is (nginx or something else). So, the components don't depend on nginx, but instead they import a specific value, e.g. ingress-class. While this has the advantage that multiple components can easily use the same ingress controller, it requires some kind of convention (each ingress controller has to export the ingress-class value, otherwise it won't be recognized). So, to come back to your example, an istio component could export some value called istio.version, and other components that need istio could depend on this value and also change their behaviour depending on the actual version of istio that is deployed.

The 'deploy this component if it isn't there' part can also be achieved within our model, although I'm not sure whether it can be done in a nice way. Maybe we need to add some features there.

The 'reference counting' is not possible right now. If you delete a component, everything that was deployed by it will be removed, independently of whether something still depends on it. The only way I see to resolve this is connected to the previous point - if we have, let's say, dedicated 'ensured' components (which will only be deployed if a specific value isn't already exported by some other component), then we could make deletion of installations work in a way that they don't remove 'ensured' subinstallations if something still depends on their exports. However, this is not possible in the current model and might be difficult to add.
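For illustration, a minimal sketch of the dependency model described above: a component that imports a value depends on whichever component exports it, which yields an installation order. The component names and keys are invented and this is not the landscaper implementation:

```go
// Sketch only: derive an installation order from import/export declarations.
package main

import "fmt"

type Component struct {
	Name    string
	Imports []string
	Exports []string
}

// installationOrder sorts components so that every exporter comes before its
// importers (a simple depth-first topological sort).
func installationOrder(components []Component) ([]string, error) {
	exporter := map[string]string{} // export key -> exporting component
	byName := map[string]Component{}
	for _, c := range components {
		byName[c.Name] = c
		for _, e := range c.Exports {
			exporter[e] = c.Name
		}
	}

	var order []string
	visiting := map[string]bool{}
	done := map[string]bool{}

	var visit func(name string) error
	visit = func(name string) error {
		if done[name] {
			return nil
		}
		if visiting[name] {
			return fmt.Errorf("dependency cycle involving %q", name)
		}
		visiting[name] = true
		for _, imp := range byName[name].Imports {
			if dep, ok := exporter[imp]; ok && dep != name {
				if err := visit(dep); err != nil {
					return err
				}
			}
		}
		visiting[name] = false
		done[name] = true
		order = append(order, name)
		return nil
	}

	for _, c := range components {
		if err := visit(c.Name); err != nil {
			return nil, err
		}
	}
	return order, nil
}

func main() {
	order, err := installationOrder([]Component{
		{Name: "app", Imports: []string{"ingress-class"}},
		{Name: "nginx-ingress", Exports: []string{"ingress-class"}},
	})
	fmt.Println(order, err) // [nginx-ingress app] <nil>
}
```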

@schrodit What do you think about a dedicated way of marking blueprints as 'ensuring', causing the whole blueprint to do nothing if some values are already exported by something else? Might be difficult though, as this introduces something similar to optional exports which we neither have nor want, because this might break some constraints ... but I see a use-case here that will probably occur quite often.
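For illustration, a hypothetical sketch of how such 'ensured' components could behave on deployment and deletion. None of this exists in the landscaper today; all type and field names are invented:

```go
// Sketch only: an "ensured" component is skipped if its export already exists,
// and is kept on deletion as long as anything still imports one of its exports.
package main

import "fmt"

type Installation struct {
	Name    string
	Ensured bool
	Imports []string
	Exports []string
}

// shouldDeploy deploys an ensured installation only if none of its exports are
// already provided by something else.
func shouldDeploy(in Installation, existingExports map[string]bool) bool {
	if !in.Ensured {
		return true
	}
	for _, e := range in.Exports {
		if existingExports[e] {
			return false
		}
	}
	return true
}

// mayDelete allows removing an ensured installation only once nothing left in
// the cluster imports any of its exports (a form of reference counting).
func mayDelete(in Installation, remaining []Installation) bool {
	if !in.Ensured {
		return true
	}
	for _, e := range in.Exports {
		for _, other := range remaining {
			for _, imp := range other.Imports {
				if imp == e {
					return false
				}
			}
		}
	}
	return true
}

func main() {
	istio := Installation{Name: "istio", Ensured: true, Exports: []string{"istio.version"}}
	kyma := Installation{Name: "kyma", Imports: []string{"istio.version"}}
	fmt.Println(shouldDeploy(istio, map[string]bool{})) // true: nothing exports istio.version yet
	fmt.Println(mayDelete(istio, []Installation{kyma})) // false: kyma still imports istio.version
}
```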

kramerul commented 3 years ago

@Diaphteiros, sorry, it's hard for me to follow your argumentation, because you introduced a component. What is a component? Is it a blueprint?

Diaphteiros commented 3 years ago

Sorry for the confusion. We avoided the term 'component' in the landscaper naming scheme, because it is too generic. I used it for the abstract concept of a set of deployments belonging together (e.g. istio could be one component). In landscaper terms, what I called 'component' is usually represented by one blueprint.

kramerul commented 3 years ago

I had a look at the code and was not able to find anything that would indicate dependency management. I would have expected that in a more prominent place.

vlerenc commented 3 years ago

I totally agree that a CLI should and has to be provided to make the user experience a good one, but please do see where we are coming from: we need to replace our managed service installer and community installer with a single new solution (no double-maintenance anymore) that also allows deploying more than just that/us (i.e. an entire stack), with conceptual improvements that prevent problems we have experienced in the past 4 years (learning from our mistakes and making sure the coming solution avoids them by design). Local usage is not in focus in the beginning.

the cli should deploy directly into the target cluster (without intermediate CRs in a management cluster)

When I read that, it is important how you meant it, because there will never be a Landscaper without a management cluster. The Landscaper is a Kubernetes-native tool and will always require a cluster. But target and management cluster can be the same, so you might get your desired behaviour still (just like the Hub controller can be deployed outside or inside of the target cluster where it manages the installed software packages). But the Landscaper will never run significant parts of its logic in the CLI. That is an anti-pattern breaking the Kubernetes level- (vs. edge-) triggered philosophy of declarative resources. Plus, it would re-introduce one known problem we wanted to eliminate by design here: another (local) ghost entry vector manipulating a landscape.

So, your architecture proposal cannot be implemented as it would violate all our learnings. Its code will never run outside in a CLI – this is no Terraform or Helm clone and we consider this an anti-pattern. In fact Helm 3’s goals were similar in the beginning, but they backed off and we were very disappointed they did so - it devalued Helm 3 for us significantly. Now you may say, Terraform and Helm are successful and you are right. But they serve a different, more imperative user-centric goal. The landscaper doesn’t – we don’t – and that’s what it is. We have a very clear design goal here – in fact the same that made Gardener so reliable. The bits and pieces we still use of Terraform (for infrastructure) and Helm (never the CLI, but the templating) hurt us even today. It is exactly what we don’t want to do, put code into a CLI to do the deployments.

If you accept that, maybe we can help you and ourselves here, because there could and should be still a CLI that simplifies the interactions (maybe rather a kubectl plugin than a new CLI btw.) that gives you that ease of interaction. Just like Helm, it could install the “server component” and work with the resources (please don’t compare it to Tiller though, because that was garbage and served another purpose that became idiotic after the advent of TPRs/CRDs). But the model of interaction is and remains Kubernetes-native and the Landscaper will always need compute (as a controller), which is why the CLI isn’t, never was, and cannot be the target for the deployment routines themselves.

No. Landscaper controller and cli will live in parallel as shown in the architecture diagram.

Then everything is good, but the image in the description shows it outside the management cluster and as said, target and management cluster can be the same and in our cases will even always be the same. Why would we run another cluster for the Landscaper and Gardener? No, you would start with a kind or whatever cluster, bootstrap your first (Garden) cluster, pivot the Landscaper into the Garden cluster and take over from there. You would always be able to restore or recreate that cluster, if need be, from a kind or whatever cluster again.

Extensibility could also happen at the language (go) level.

Could, but that’s then not the Landscaper. The Landscaper follows the same design decisions as Gardener did and that helped us a lot. All resources are out in the cluster, can be inspected and manipulated with kubectl, and are watched and materialised by the Landscaper. What you suggest is a completely different tool, I fear.

time-based reconciles

That’s just a detail, but that’s certainly not the way a controller/an operator should be written. That makes me think of Terraform and Helm and how one tries to make these incompatible philosophies work together. This will not be satisfying (and we know that, as we do it in our Gardener infrastructure reconciliations and it's very problematic for us). So, the proper way, of course, is for controllers to use watches and limit what they do to what has changed and its dependencies. You don’t want to run the SCP LSS installer in an endless loop again and again either, at least I don’t. That’s another anti-pattern, trying to put a CLI into a pod into a cluster – it’s not the Landscaper design.

Your proposal drastically deviates from all good Kubernetes practices and we could not implement it, because we fundamentally do not believe in that model. Gardener is 100 % Kubernetes-native and that’s also the goal for the Landscaper. This design has helped us tremendously in the operation of vast landscapes. Again, we can and should help and improve the usability from the command line, but it will be first and foremost a Kubernetes controller, because we see enormous value here from past and present experience. And if Kubernetes is no longer a thing at some point in the (hopefully far) future, then neither will Gardener and the entire Kubernetes-based stack on top of it be, and we will have to use or design whatever comes after that.

Would you use a system which manages software installations on your computer, but requires opening a central corporate service to administrate the software on your computer, or would you prefer using brew install?

Landscaper is no “corporate service to administrate software” at all – it never was. Why do you think that? We do not plan to offer a “corporate” Landscaper endpoint – it’s no Jenkins as a Service either. It’s a Kubernetes controller that we can and will sweeten up with a CLI, but it is a Kubernetes controller that you can install wherever it fits best.

And no, brew install is another anti-pattern. You can do that locally and of course I enjoy the experience, but we are speaking here not about a machine you operate as a human being, but about a landscape that self-operates/is run by a machine. There, I explicitly do not want brew install - I don’t want human operators at all, I want machine operators only. That’s how we few people run many thousands of clusters across 100+ regions worldwide. It’s the only way to scale. At no point in time is a CLI used, and the Landscaper will never primarily become one (only the convenience part may/will, but never the main code of the thing).

From my point of view, the default case should be that you don't need a single line of configuration to install software

Locally, sure, managed, never. How is that realistic or where do you see that in real life? The landscape.yaml or LSS, no gold standard certainly, have to carry and provide the exchange/import/export of configuration. You do not install a monolith here or a single desktop app. I suggest we stop the homebrew comparison. It doesn’t fit. It also doesn’t fit from the dependency angle, because homebrew usually installs inert components/libraries/pieces of software that are “used” by code on demand, but don't run by themselves. LSS or Landscaper install active components that run and for that they need configuration. That’s why Maven or homebrew dependency management is a gazillion times simpler. Landscaper is not deliberately trying to complicate things. Deploying and wiring a software stack is no trivial task. It's challenging, also because the software stacks we need to deploy in the future are being developed by many parties (and you can’t develop something as a monolith anymore when it reaches a certain size). That’s so much more demanding than a desktop app or a few libraries that you install as dependencies. If you manage to pull that off, deploy something complex with something like homebrew, I would be your first disciple and follow you everywhere. 😉

I was hoping that I could improve the usability a bit with this proposal.

Yes, you did. You raised awareness of how much we focused on the machinery, leaving out how people would work with it; if they don't get sufficiently enabled, adoption will be a problem and mistakes will be made. So do not despair, that goal is reached; we have heard you. 😊

dependency management

That is one of the main goals and if we have failed you here, let's discuss how we can improve, sure. But then it should happen in another ticket. This one is called “homebrew like user experience” and while we want to and will provide some convenience at the CLI level (and also the dependencies argument of homebrew doesn't fit here), the core of the Landscaper is by design a controller that requires compute and runs in a cluster. We will not move this into a CLI – someone else would have to develop another tool, if you want that, because we fundamentally don't believe in this approach for what we try to achieve here.

kramerul commented 3 years ago

Hi @vlerenc,

thank you for your detailed answer. It was a pleasure to read your comments.

Unfortunately, I still disagree in some points. But the world would be so boring if everyone had the same opinion.

You often use gardener as an idol. I also love gardener and the concepts used there. But from my point of view, software installation is a different domain. For me it's the same difference as between installing an operating system and installing software on top of an operating system. Therefore the concepts of gardener might not fit. But I would like to leave it at that.

If you manage to pull that off, deploy something complex with something like homebrew, I would be your first disciple and follow you everywhere.

Challenge accepted 😉. What would be the task? Would a modular installation of kyma and cf-for-k8s as shown here be enough?

That is one of the main goals and if we have failed you here, let’s discuss how we can improve, sure. But then it should happen in another ticket

I will open another ticket for this: #65

vlerenc commented 3 years ago

The 'reference counting' is not possible right now. If you delete a component, everything that was deployed by it will be removed, independently of whether something still depends on it. The only way I see to resolve this is connected to the previous point - if we have, let's say, dedicated 'ensured' components (which will only be deployed if a specific value isn't already exported by some other component), then we could make deletion of installations work in a way that they don't remove 'ensured' subinstallations if something still depends on their exports. However, this is not possible in the current model and might be difficult to add.

@Diaphteiros @schrodit "it will be removed, independently of whether something still depends on it" may actually not happen, I believe. We discussed that we need import/export validation or else we are not better than what we have today in that respect. We cannot have careless configuration break a landscape. Such a breaking change should therefore be detected and rejected. I know, you said it would be difficult and we had once a follow-up discussion, but what is the current state?

vlerenc commented 3 years ago

You often use gardener as an idol. I also love gardener and the concepts used there. But from my point of view, software installation is a different domain.

@kramerul Sure, cluster management and landscape deployment are different beasts, I fully agree. However, deep down they share some common properties. We simply believe in full automation (which is why a CLI is not on our mind from a production PoV). Wrapping CLIs in CronJobs neither scales nor even works satisfactorily (error behavior), which we know from past experience all too well. Having a watchdog a.k.a. controller (with its own compute) helps the system stay alive (in LSS, Concourse only gets active if the sources change, not if the landscape changes, e.g. accidental ops change, infrastructure change, etc.; with imperative tools such as homebrew, helm, or terraform it’s even worse than that and nothing happens anymore after invocation until a human decides that something needs attention).

We have not so much targeted a human user or single cluster. We have targeted the automated deployment and continuous automated operations (therefore controller-based) of vast landscapes. Explicit wiring between deployed components. Validations of these before deployments. Proper roll-backs. Proper notions of full or partial deployment runs. Hooks for migrations. No ghost deployments by always having the desired state in the management cluster. These things concerned us most.

kramerul commented 3 years ago

Just out of curiosity: are you really watching all resources that were installed? Doesn't this put too much load on kubernetes or open too many connections? I know that even kapp has several problems doing this during the short phase of installation.

vlerenc commented 3 years ago

- Specify dependencies between blueprints (e.g. cf-for-k8s depends on istio)
- Specify version ranges for the dependency (e.g. cf-for-k8s requires istio with a version greater than 1.7)
- If I install one blueprint, which depends on another, the missing blueprint is automatically installed
- One blueprint should be able to read installation values from a dependent blueprint (e.g. cf-for-k8s needs to get the credentials for an already installed docker registry, which was installed before, triggered by the kyma installation). It would be even better if cf-for-k8s could trigger the creation of new credentials.
- I'm not sure if there should be some kind of reference counting (istio should remain in the cluster if cf-for-k8s is uninstalled but kyma is still installed)

@kramerul All good points. Thanks for opening #65. Let's discuss there.

Challenge accepted

:-)

What I meant was: you were comparing the installation of dependencies by homebrew, i.e. inert components such as libraries, with live components in a landscape that need configuration. An inert library on disk or whatever doesn't need any wiring. The wiring comes from the executable that invokes its APIs in the form of function parameters at runtime. For landscapes, the runtime starts immediately with the deployment, which is why the two are different in practice. It's far easier to manage maven, pip, npm or similar dependencies than runtime dependencies that need to be properly set up to start operating. That's at least our "felt pain" in landscape management. We have broken landscapes now and then by transporting components but not adding or migrating their configuration, and defaulting doesn't really always work and is in fact another pain point. I think @mandelsoft once said we should drop all default behaviour, as it shadows what one should know and configure explicitly. Anyway, it's certainly never all black and white, but in real life it is painful to get a complex system running and keep it running.

Just out of curiosity: are you really watching all resources that were installed?

That's the idea, yes.

Doesn't this put too much load on kubernetes or open too many connections? I know that even kapp has several problems doing this during the short phase of installation.

Hmm... which problems does it have "during the short phase of installation"? In any case, that's what controllers do: they watch resources and act upon them. Kubernetes has excellent support for that, which goes deep down to the lowest level in ETCD. It's also what we do in hundreds of controllers. We watch everything in a Garden or Seed cluster and therefore have many controllers with a massive amount of watches. In comparison, the number of landscape artifacts seems small.
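For illustration, a minimal client-go sketch of the watch-based model described here: a shared informer lists once and is afterwards only notified when a resource actually changes, with no polling involved. This is generic Kubernetes code (watching ConfigMaps), not landscaper internals:

```go
// Sketch only: event-driven watches via a client-go shared informer.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset := kubernetes.NewForConfigOrDie(cfg)

	factory := informers.NewSharedInformerFactory(clientset, 0)
	cmInformer := factory.Core().V1().ConfigMaps().Informer()
	cmInformer.AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc: func(obj interface{}) {
			cm := obj.(*corev1.ConfigMap)
			fmt.Println("added:", cm.Namespace+"/"+cm.Name)
		},
		UpdateFunc: func(oldObj, newObj interface{}) {
			cm := newObj.(*corev1.ConfigMap)
			fmt.Println("updated:", cm.Namespace+"/"+cm.Name)
		},
	})

	stop := make(chan struct{})
	factory.Start(stop)            // starts the underlying watches
	factory.WaitForCacheSync(stop) // one initial list, then purely event-driven
	<-stop                         // block forever (sketch)
}
```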

kramerul commented 3 years ago

Hmm... which problems does it have "during the short phase of installation"?

There are often communication losses to the kubernetes API server. The kapp repo is full of code where all kinds of these errors are handled.

schrodit commented 3 years ago

The 'reference counting' is not possible right now. If you delete a component, everything that was deployed by it will be removed, independently of whether something still depends on it. The only way I see to resolve this is connected to the previous point - if we have, let's say, dedicated 'ensured' components (which will only be deployed if a specific value isn't already exported by some other component), then we could make deletion of installations work in a way that they don't remove 'ensured' subinstallations if something still depends on their exports. However, this is not possible in the current model and might be difficult to add.

@Diaphteiros @schrodit "it will be removed, independently of whether something still depends on it" may actually not happen, I believe. We discussed that we need import/export validation or else we are not better than what we have today in that respect. We cannot have careless configuration break a landscape. Such a breaking change should therefore be detected and rejected. I know, you said it would be difficult and we had once a follow-up discussion, but what is the current state?

You're right, this is not correct. Currently, if an installation a exports some value that is imported by another installation b, then a cannot be deleted.

I think what Johannes wanted to say is that if you have an installation with an aggregated blueprint and the installation is deleted, then all subinstallations are also deleted.

vlerenc commented 3 years ago

@kramerul Well, I don't know what their problem is, but we don't see/have that. We run many, many thousands of controllers all the time (just within one seed that runs 250 control planes or whatever, we have many controller managers and even more controllers per control plane) and have watches on practically everything. There are important details, though, e.g. watch cache sizes. If you do it right, Kubernetes + ETCD are pretty awesome at this and you get notified only when something changes, without having to poll the hell out of the cluster (otherwise our seeds wouldn't support that many shoots or our Gardener wouldn't keep track of what our end users are doing in the garden cluster). It's also what the control plane controllers are doing. So, the kapp controller must have other problems, but controllers and watches are at the core of Kubernetes.

Diaphteiros commented 3 years ago

I guess I got something mixed up there. Sorry for the confusion ;-)

mandelsoft commented 3 years ago

Maybe let's return to the initial desire to get something like a homebrew (or any other Linux-based package manager) experience.

There are at least two major differences between the use cases of package managers and the landscaper:

So, in the best case the landscaper uses blueprints as basic installation templates that are basically landscape-layout agnostic. These are composed into aggregated blueprints, which describe dedicated installation scenarios that again make only partial layout assumptions and can in turn be composed into more complex landscapes. This aggregation process finally ends in a blueprint describing one dedicated landscape with its own dedicated layout. This is the general scenario the landscaper and its elements are designed for.

Every such final scenario could be seen as one dedicated target environment for a package manager. So, for every scenario (or landscape layout) a potentially complete landscape blueprint describes all the wiring and installations (with blueprint versions).

A package-manager-like experience could therefore only be provided separately for every such dedicated (possibly parameterized) landscape layout, meaning that for every such layout its own package tree could be provided. But these packages would then no longer describe software, but dedicated instances (dedicated databases, servers, etc., possibly with foreseen logical installation targets). Basically this means splitting the complete landscape blueprint into smaller parts (let's call them modules), each consisting of a set of DataObjects and Installations together with dependencies on DataObjects exposed by other modules. Here, too, version requirements could be established.

The landscaper provides all the elements necessary to design such higher-level elements that could be used to support dedicated, foreseen landscape layouts. But it is not the task of the landscaper to provide such elements. This could be described by a dedicated module CR taken from a landscape repository and mapped to the appropriate landscaper elements on demand by a dedicated controller. Such a model could re-use all the blueprints and installation mechanisms from the landscaper to finally execute the installation of incrementally selectable modules from a landscape repository into a dedicated landscape instance following the foreseen landscape layout (possibly with parameterized, logically nested target environments, i.e. k8s clusters).
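For illustration, a hypothetical Go sketch of what the spec of such a 'module' custom resource could look like. None of these types exist in the landscaper API; all names and fields are invented to mirror the description above:

```go
// Sketch only: a possible shape for a hypothetical "module" CR spec that a
// dedicated controller could map to landscaper installations and data objects.
package module

// ModuleSpec bundles installations and data objects and declares which data
// objects it needs from other modules, including version requirements.
type ModuleSpec struct {
	// Installations (blueprints) this module maps to in the landscaper.
	Installations []InstallationTemplate `json:"installations"`
	// Data objects this module exposes to other modules.
	ExportedDataObjects []string `json:"exportedDataObjects,omitempty"`
	// Data objects this module requires from other modules.
	Requires []Requirement `json:"requires,omitempty"`
}

type InstallationTemplate struct {
	Name         string `json:"name"`
	BlueprintRef string `json:"blueprintRef"` // e.g. an OCI reference to a blueprint
}

type Requirement struct {
	DataObject   string `json:"dataObject"`             // e.g. "istio.version"
	VersionRange string `json:"versionRange,omitempty"` // e.g. ">= 1.7"
}
```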

kramerul commented 3 years ago

Hi @mandelsoft,

maybe this does not work in theory, but in practice it looks quite good to my taste. In this example, I used shalm to install kyma and cf-for-k8s on a cluster.

achimweigel commented 1 year ago

@In-Ko closed because outdated and/or copied into internal project