Closed phil-abb closed 3 months ago
The committers listed above are authorized under a signed CLA.
An alternative approach was suggested for the sources: group them by "helm" and "docker".
```yaml
sources:
  helm:
    - name: digitron-orchestrator
      type: helm.v3
      properties:
        repository: oci://northstarida.azurecr.io/charts/northstarida-digitron-orchestrator
        revision: 1.0.9
        wait: true
    - name: database-services
      type: helm.v3
      properties:
        repository: oci://quay.io/charts/realtime-database-services
        revision: 2.3.7
        wait: true
  docker:
    - name: digitron-orchestrator-docker
      type: docker-compose
      properties:
        packageLocation: https://northsitarida.com/digitron/docker/digitron-orchestrator.tar.gz
        keyLocation: https://northsitarida.com/digitron/docker/public-key.asc
```
@phil-abb, would you mind updating the commits to include a Signed-off-by: line, as described in the contributing guidelines, and probably squash them?
@stormc I enabled gpg signing - I think it's right but please let me know if it's not done correctly.
Small change proposal to the "sources" section:
Currently, helm and docker have "properties" key:
```yaml
sources:
  helm:
    - name: hello-world
      type: helm.v3
      properties:
        repository: oci://northstarida.azurecr.io/charts/hello-world
        revision: 1.0.1
        wait: true
  docker:
    - name: digitron-orchestrator-docker
      type: docker-compose
      properties:
        packageLocation: https://northsitarida.com/digitron/docker/digitron-orchestrator.tar.gz
        keyLocation: https://northsitarida.com/digitron/docker/public-key.asc
```
As the content under "properties" is specific to helm and docker, I suggest removing the "properties" key and putting the contents directly under the respective parent elements.
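To illustrate the suggestion (a sketch of the proposed shape, not part of the PR), the same example with the "properties" key removed would read:

```yaml
sources:
  helm:
    - name: hello-world
      type: helm.v3
      repository: oci://northstarida.azurecr.io/charts/hello-world
      revision: 1.0.1
      wait: true
  docker:
    - name: digitron-orchestrator-docker
      type: docker-compose
      packageLocation: https://northsitarida.com/digitron/docker/digitron-orchestrator.tar.gz
      keyLocation: https://northsitarida.com/digitron/docker/public-key.asc
```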
From a developer's perspective, having it split out like this makes more sense because "name", "type", and "properties" are common, so you can use a common model for that part. It also provides a bit more organization/clarity in my mind.
What do others think?
As you specified it: "Properties for sources using Helm" and "Properties for sources using docker-compose", developers will have to define two classes to model the properties anyway. Hence you might as well include those contents in the upper section...
Here is an example of how you can serialize/deserialize the sources with C# without having to create special objects for docker-compose and helm.
```csharp
using System;
using System.Collections.Generic;
using System.IO;
using YamlDotNet.Serialization;

internal class Program
{
    private static void Main(string[] args)
    {
        var yamlInput = File.ReadAllText("example-input.yaml");
        var deserializer = new DeserializerBuilder().Build();
        var exampleSources = deserializer.Deserialize<SourcesRoot>(yamlInput);

        var serializer = new SerializerBuilder().Build();
        var yaml = serializer.Serialize(exampleSources);
        Console.WriteLine(yaml);
    }
}

public class SourcesRoot
{
    [YamlMember(Alias = "sources")]
    public Dictionary<string, List<Source>> Sources { get; set; }
}

public class Source
{
    [YamlMember(Alias = "name")]
    public string Name { get; set; }

    [YamlMember(Alias = "type")]
    public string Type { get; set; }

    [YamlMember(Alias = "properties")]
    public Dictionary<string, string> Properties { get; set; }
}
```
`example-input.yaml`:
```yaml
sources:
  kubernetes:
    - name: digitron-orchestrator
      type: helm.v3
      properties:
        repository: oci://northstarida.azurecr.io/charts/northstarida-digitron-orchestrator
        revision: 1.0.9
        wait: true
    - name: database-services
      type: helm.v3
      properties:
        repository: oci://quay.io/charts/realtime-database-services
        revision: 2.3.7
        wait: true
  moby:
    - name: digitron-orchestrator-docker
      type: docker-compose
      properties:
        packageLocation: https://northsitarida.com/digitron/docker/digitron-orchestrator.tar.gz
        keyLocation: https://northsitarida.com/digitron/docker/public-key.asc
```
We agreed to change "kubernetes" to "cluster" and "moby" to "stand-alone".
@arne-broering I'll make these updates on my branch.
@Haishi2016 I've also been discussing some changes to this with @arne-broering. I've started with some initial changes in this PR. The full proposal is here since the PR is for only the first set of changes.
Maybe we can sync up on these items because I think we have similar thoughts on a lot of these based on your proposal.
It's unclear to me what the relationship between properties and configurations is. It seems a configuration refers to a property. If they are linked, what's the point of separating them out? I think "properties" already represent something that is "configurable"?
I don't feel minimumResourceRequirements makes sense at this level. Resource requirements are mostly associated at the pod level on K8s, and when users declare their Helm charts they can express resource requirements in the pod specs. It's hard/impossible to enforce minimumResourceRequirements at the application definition level, as an app may contain multiple charts and each chart may contain multiple pods. I think in general we should define "just enough" for interoperability goals instead of trying to surface too many details.
It's unclear to me the relationship between properties and configurations. It seems a configuration refers to a property. If they are linked, what's the point of separating them out? I think "properties" already represent something that is "configurable"?
The proposed changes to the properties/parameters section aren't part of this PR. I responded on the other issue here
A couple of other comments:
I don't feel minimumResourceRequirements makes sense at this level...
The proposed changes to the minimumResourceRequirements section aren't part of this PR. We talked about this on Wednesday and decided to defer this section a bit and try to align it more closely to what we have in the device description document. We can discuss it more when we get back to it.
The idea for this section is to make it easier for the WOS to provide details about the application's minimum requirements in the application catalog (or marketplace when we get to that point). I think it makes sense for the WOS to be able to pair this information up with information in the device description document and observability data to be able to help a customer determine which device the application would be capable of running on if the WOS vendor chooses to implement the functionality. As far as I'm aware it is one of the Margo requirements to make this possible if the WOS vendor chooses to do it (I could be wrong on this though).
I don't like the idea of forcing the WOS vendor to interrogate YAML files, docker-compose files, and whatever else comes next to try to deduce this information. It will be very difficult, if not impossible unless we are very prescriptive about how this information is defined in those files.
For example, I can think of three different ways this information could be defined in a helm chart:
1. The helm chart defines deployments or pod resources directly and specifies the resource requirements directly on those.
2. The helm chart uses templates and defines these resource requirements somewhere in one of the values.yaml files.
3. The application uses CRDs to create the deployment or pod, and it uses its own way of defining these resource requirements that the CRD uses.
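As a sketch of why this is hard to deduce (illustrative snippets, not from the PR; the names and values are hypothetical), the same requirement can live either directly in a pod spec or behind a chart's values.yaml:

```yaml
# Way 1 (hypothetical): resources declared directly in a Deployment's pod spec
apiVersion: apps/v1
kind: Deployment
spec:
  template:
    spec:
      containers:
        - name: app
          resources:
            requests:
              cpu: 500m
              memory: 256Mi
---
# Way 2 (hypothetical): the same values routed through a chart's values.yaml,
# which the WOS would have to resolve through the chart's templates
resources:
  requests:
    cpu: 500m
    memory: 256Mi
```

A tool trying to extract requirements would need to handle both layouts (plus CRD-based ones), which supports being prescriptive about where this information lives.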
- I feel "components" is a better name than "sources". Especially, when you have "sources" in "targets" it's kind of confusing.
I'm ok with this. @arne-broering, @ajcraig what do you think?
- The use of different casing styles (e.g., camelCase for minimumResourceRequirements, kebab-case for ApplicationDescription, and snake_case for resource names) can lead to confusion and inconsistency.
Good point. We should standardize this across our entire specification.
@Haishi2016 @arne-broering @ajcraig Do any of you have recommendations/preferences for what we should use?
I thought we were already using the camelCase convention, kind of aligning with Kubernetes. It could be that there are some deviations from this in our current spec/examples, which we should definitely fix and make consistent.
I think both your points (@phil-abb and @Haishi2016) are valid. What about bringing the minimumResourceRequirements under each "component" (aka "source")? So, specifying these required resources per referenced Helm chart or Docker Compose file?
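A sketch of what that could look like (field placement and names are hypothetical, not from the spec):

```yaml
sources:
  helm:
    - name: digitron-orchestrator
      type: helm.v3
      # hypothetical: requirements stated per component rather than per application
      minimumResourceRequirements:
        cpu: "2"
        memory: 4Gi
      properties:
        repository: oci://northstarida.azurecr.io/charts/northstarida-digitron-orchestrator
        revision: 1.0.9
        wait: true
```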
I think both your points (@phil-abb and @Haishi2016) are valid. What about bringing the minimumResourceRequirements under each "component" (aka "source")? So, specifying these required resources per referenced Helm chart or Docker Compose file?
This seems to make a lot of sense to ensure the WOS is aware of what is required for the apps. Additionally, an application creator might have different minimum resources depending on the source (cluster or standalone). FYI: I'm keeping track of this PR to ensure I am caught up with our deployment specification.
I don't feel minimumResourceRequirements makes sense at this level...
@Haishi2016, @arne-broering, @ajcraig - I moved these discussions to this issue since this PR isn't adding the resource requirements section.
- I feel "components" is a better name than "sources". Especially, when you have "sources" in "targets" it's kind of confusing.
I'm ok with this. @arne-broering, @ajcraig what do you think?
Updates to use "components" instead.
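For illustration (a sketch of the rename only, assuming the structure is otherwise unchanged):

```yaml
components:
  helm:
    - name: hello-world
      type: helm.v3
      properties:
        repository: oci://northstarida.azurecr.io/charts/hello-world
        revision: 1.0.1
        wait: true
```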
- `sources` entries are grouped with a `type` instead of having specific `helm-chart` and `docker-compose` sections
- a `wait` property which, if `true`, instructs the device to wait for that helm chart to be installed before continuing to the next chart; if `false`, it does not wait
This PR is for the first set of the proposed changes. You can see the full proposal here
UPDATE: The proposed changes for how parameters/properties are handled are now under this issue