Rhywden opened 6 months ago
I'll put a few examples together tonight
I for one would like to see some additional documentation. Generating the manifests seems to work, but if I want to customize, it's incredibly painful. I see it generating kustomization.yml files, but if I generate the manifests again, they just overwrite the existing kustomization.yml files.
One thing I'd like to do is add health checks, or modify the YAML. I'd like a clear-cut example of how I can generate the manifests, add some kustomize changes, and then, when I add new services or alter the configuration by adding new references, how I can retain those health checks.
Yes, docs will be updated with examples later this week.
Aspirate is additive by nature in terms of what you select in the component list while running it. So if you have three services named ServiceA, ServiceB and ServiceC, run generate and, when prompted, select ServiceA only.
On the next run, select ServiceB only: any changes you have made to the manifests for ServiceA will not be lost. Only when you reselect a component will its manifests be overwritten. When working this way, you will have to manually manage the main top-level kustomization.yml file to add the generated directories under the resources key.
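As a minimal sketch of that manually managed file (the directory names below are illustrative; use whatever directories aspirate actually generated for you):

```yaml
# Top-level kustomization.yml, maintained by hand when generating
# components selectively. Each entry points at a generated directory.
resources:
  - ./servicea
  - ./serviceb
  - ./servicec
```

Each time you generate a new component, you add its directory here yourself; aspirate will not touch the entries for components you did not reselect.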
It uses kustomize for deployments, the idea being that kustomize is extensible by nature.
https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/
For instance, with the aspire-starter template, you could manually create a patches directory containing two files.

`apiservice.yaml`:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: apiservice
spec:
  replicas: 10
  minReadySeconds: 120
  template:
    spec:
      containers:
      - name: apiservice
        resources:
          requests:
            cpu: "500m"
```
`webfrontend.yaml`:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webfrontend
spec:
  template:
    spec:
      containers:
      - name: webfrontend
        resources:
          limits:
            cpu: "1"
            memory: "500Mi"
```
Then, in the top-level kustomization.yml file, add a patches section:

```yaml
patches:
  - path: patches/apiservice.yaml
    target:
      group: apps
      version: v1
      kind: Deployment
      name: apiservice
  - path: patches/webfrontend.yaml
    target:
      group: apps
      version: v1
      kind: Deployment
      name: webfrontend
```
When you run `kustomize build .`, or `aspirate apply`, you will see that the resources are patched as if they contained the changes directly:
```yaml
apiVersion: v1
data:
  ASPNETCORE_FORWARDEDHEADERS_ENABLED: "true"
  ASPNETCORE_URLS: http://+:8080;
  OTEL_DOTNET_EXPERIMENTAL_OTLP_EMIT_EVENT_LOG_ATTRIBUTES: "true"
  OTEL_DOTNET_EXPERIMENTAL_OTLP_EMIT_EXCEPTION_LOG_ATTRIBUTES: "true"
  OTEL_DOTNET_EXPERIMENTAL_OTLP_RETRY: in_memory
  OTEL_EXPORTER_OTLP_ENDPOINT: http://aspire-dashboard:4317
kind: ConfigMap
metadata:
  name: apiservice-env
---
apiVersion: v1
data:
  ASPNETCORE_FORWARDEDHEADERS_ENABLED: "true"
  ASPNETCORE_URLS: http://+:8080;
  OTEL_DOTNET_EXPERIMENTAL_OTLP_EMIT_EVENT_LOG_ATTRIBUTES: "true"
  OTEL_DOTNET_EXPERIMENTAL_OTLP_EMIT_EXCEPTION_LOG_ATTRIBUTES: "true"
  OTEL_DOTNET_EXPERIMENTAL_OTLP_RETRY: in_memory
  OTEL_EXPORTER_OTLP_ENDPOINT: http://aspire-dashboard:4317
  services__apiservice__http__0: http://apiservice:8080
kind: ConfigMap
metadata:
  name: webfrontend-env
---
apiVersion: v1
kind: Service
metadata:
  name: apiservice
spec:
  ports:
  - name: http
    port: 8080
    targetPort: 8080
  - name: https
    port: 8443
    targetPort: 8443
  selector:
    app: apiservice
  type: ClusterIP
---
apiVersion: v1
kind: Service
metadata:
  name: aspire-dashboard
spec:
  ports:
  - name: dashboard-ui
    port: 18888
    protocol: TCP
    targetPort: 18888
  - name: otlp
    port: 4317
    protocol: TCP
    targetPort: 18889
  selector:
    app: aspire-dashboard
  type: ClusterIP
---
apiVersion: v1
kind: Service
metadata:
  name: webfrontend
spec:
  ports:
  - name: http
    port: 8080
    targetPort: 8080
  - name: https
    port: 8443
    targetPort: 8443
  selector:
    app: webfrontend
  type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: apiservice
  name: apiservice
spec:
  minReadySeconds: 120
  replicas: 10
  selector:
    matchLabels:
      app: apiservice
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: apiservice
    spec:
      containers:
      - envFrom:
        - configMapRef:
            name: apiservice-env
        image: apiservice:latest
        imagePullPolicy: IfNotPresent
        name: apiservice
        ports:
        - containerPort: 8080
        - containerPort: 8443
        resources:
          requests:
            cpu: 500m
      terminationGracePeriodSeconds: 180
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: aspire-dashboard
  name: aspire-dashboard
spec:
  replicas: 1
  selector:
    matchLabels:
      app: aspire-dashboard
  template:
    metadata:
      labels:
        app: aspire-dashboard
    spec:
      containers:
      - env:
        - name: DOTNET_DASHBOARD_UNSECURED_ALLOW_ANONYMOUS
          value: "true"
        image: mcr.microsoft.com/dotnet/nightly/aspire-dashboard:8.0.0-preview.6
        livenessProbe:
          httpGet:
            path: /
            port: 18888
          initialDelaySeconds: 30
          periodSeconds: 10
        name: aspire-dashboard
        ports:
        - containerPort: 18888
          name: dashboard-ui
        - containerPort: 18889
          name: otlp
        readinessProbe:
          httpGet:
            path: /
            port: 18888
          initialDelaySeconds: 30
          periodSeconds: 10
        resources:
          limits:
            memory: 512Mi
          requests:
            cpu: 500m
            memory: 512Mi
      terminationGracePeriodSeconds: 30
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: webfrontend
  name: webfrontend
spec:
  minReadySeconds: 60
  replicas: 1
  selector:
    matchLabels:
      app: webfrontend
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: webfrontend
    spec:
      containers:
      - envFrom:
        - configMapRef:
            name: webfrontend-env
        image: webfrontend:latest
        imagePullPolicy: IfNotPresent
        name: webfrontend
        ports:
        - containerPort: 8080
        - containerPort: 8443
        resources:
          limits:
            cpu: "1"
            memory: 500Mi
      terminationGracePeriodSeconds: 180
```
From the above you can see that my patch of 10 replicas and the 120-second minReadySeconds has been applied, and resource requests/limits now exist.
Utilising patches like this means you can regenerate the resources as many times as you need to during development, and your custom changes won't be lost on the resources themselves.
The only thing you will have to be careful of is management of the top-level kustomization.yaml (the prompt just before "do you want to generate a helm chart"): saying yes to that will overwrite the file, meaning you will have to add the patches:
node again, but your individual patch files won't be lost.
You can add whatever you want in the patches, including health checks, tolerations etc. The base deployment resources will be merged with them additively, replacing any values that already exist with what's in the patch, and adding values that aren't there yet.
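For instance, a patch adding health probes and a toleration might look like this. This is a hypothetical sketch: the file name, the `/health` and `/ready` endpoints, and the toleration key are assumptions, not something aspirate generates, so map them to whatever your application actually exposes.

```yaml
# patches/apiservice-health.yaml (hypothetical example)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: apiservice
spec:
  template:
    spec:
      # Example toleration; key/value are placeholders.
      tolerations:
      - key: "dedicated"
        operator: "Equal"
        value: "api"
        effect: "NoSchedule"
      containers:
      - name: apiservice
        # Probe paths assume your app exposes these endpoints.
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
          initialDelaySeconds: 15
          periodSeconds: 10
        readinessProbe:
          httpGet:
            path: /ready
            port: 8080
          initialDelaySeconds: 5
          periodSeconds: 10
```

Register it under the `patches:` key of the top-level kustomization.yaml the same way as the earlier examples, and it will survive regeneration of the base manifests.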
@prom3theu5
One thing that is not clear to me: is Aspirate meant to be used from a CI/CD pipeline (e.g. a GitHub Action)?
For example: having a patches directory with all the overlays for deployments like dev and prod, and on each push letting the pipeline call `aspirate generate`, then applying those patches to aspirate-output, which is then deployed to the Kubernetes cluster?
@vlachdev
Your example is one definite use case.
The way I see it:
`aspirate run` will create the resources directly in your selected cluster without outputting any manifests. Useful for quick debugging etc.
`aspirate build` can be used to build all your Dockerfiles and projects as containers. Whatever options are stored in the aspirate-state.json file, including image tags, will be used, as the state is shared across all commands.
Both scenarios are valid from a usability point of view. You'd most probably work locally until you are sure of your deployment configuration, then use the committed aspirate-state.json file to mirror the selected options in your CI/CD pipeline.
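As a rough sketch of how that pipeline could look as a GitHub Actions workflow. Everything here is an assumption for illustration (the workflow name, the AppHost path, and that kubeconfig credentials are already configured on the runner); only the idea of driving aspirate from the committed aspirate-state.json comes from the discussion above:

```yaml
# .github/workflows/deploy.yml (illustrative sketch, not official docs)
name: deploy
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-dotnet@v4
        with:
          dotnet-version: 8.0.x
      - run: dotnet tool install -g aspirate
      # Options come from the committed aspirate-state.json,
      # so CI mirrors what was selected locally.
      - run: aspirate generate --non-interactive --output-path ./aspirate-output
        working-directory: ./src/MyApp.AppHost   # hypothetical path
      # Assumes cluster credentials were configured in an earlier step.
      - run: kubectl apply -k ./aspirate-output
```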
You can now also maintain different feature sets within the same app host if you want to deploy incrementally, by using launch profiles in the app host project and passing the `--launch-profile` argument to aspirate.
Another way to manage this would be to use `--launch-profile` combined with `--output-path` to output separate sets of kustomize manifests (it's actually the way I'm using it on one project).
You'd output them to a common place each time.
Let's call this $DEV_HOME. So if you had one launch profile called prometheus and another called poseidon, you could output them both like this:
```shell
mkdir -p $DEV_HOME/base || true
aspirate generate --launch-profile prometheus --output-path $DEV_HOME/base/prometheus
aspirate generate --launch-profile poseidon --output-path $DEV_HOME/base/poseidon
```
Then inside the $DEV_HOME directory you could create a kustomization.yaml file with:

```yaml
bases:
  - ./base/prometheus
  - ./base/poseidon
patchesStrategicMerge:
  - ./patches/some_local_patches.yaml
```

and create the patches file alongside it.
Then, running `kustomize build $DEV_HOME`, you'd get all your manifests generated and patched with your patches.
You can run each of the aspirate generate commands as many times as you want with the same launch profiles and output paths, and each time rebuild the parent directory with your patches intact.
Launch profiles don't need to be used, either: if you just have one output folder, simply don't pass the argument.
The main docs site is here btw: https://prom3theu5.github.io/aspirational-manifests/getting-started.html
@prom3theu5
Thank you for your exhaustive reply. I have managed to deploy via Kustomize to an Azure cluster earlier, as I described above; I just want to make sure I'm using Aspirate as intended.
I thought I should mention that Rancher is not supported by Aspire. It mostly works, but not entirely, so it's probably best not to point Aspir8 users at it (even though it has nice port-forwarding features).
https://learn.microsoft.com/dotnet/aspire/fundamentals/setup-tooling#container-runtime
Feature Description
As a newbie to Aspire, I'd like to have a manual / tutorial / walkthrough on how to actually meaningfully deploy this whole thing.
Goals
The thing is that I can create an Aspire project in development just fine. The problem then comes from the lack of documentation, or at least pointers for beginners, on how to actually achieve a deployment, because a lot of the steps seem to assume that you're already intimately familiar with the jargon.
Case in point: I created a simple Blazor web client plus an API project behind it. It's what you get when you open Visual Studio 2022 Preview and choose the ".NET Aspire Starter Application".
I then used the various "aspirate init / generate / ..." commands with the default answers to the questions and was able to push the whole thing into my local Kubernetes. Of course, I then ran into the problem that various ports were not forwarded. Thankfully SUSE Rancher Desktop allows forwarding those after the fact.
Which then resulted in half-working applications, because none of the containers were able to talk to each other.
Which in turn leaves me confused, because I thought the whole point of Aspire was to be able to define communication pathways / dependencies between the services programmatically and make service discovery "just work".