canonical / multipass-blueprints

Blueprint definitions for [`multipass launch`](https://multipass.run)
GNU General Public License v3.0

Add rockcraft to the charm-dev blueprint #45

Closed · weiiwang01 closed this issue 5 months ago

weiiwang01 commented 6 months ago

As rockcraft has become the recommended method for building OCI images for charms, it would be good to pre-install rockcraft in the Multipass blueprint for charm development as well.
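For readers following along: until the blueprint itself ships rockcraft, one possible manual workaround is to launch the existing charm-dev blueprint and install the snap by hand. This is a sketch, not part of any blueprint; the instance name is illustrative.

```shell
# Launch the existing charm-dev blueprint, then install the
# rockcraft snap inside the instance (rockcraft is classic-confined).
multipass launch charm-dev --name charm-dev
multipass exec charm-dev -- sudo snap install rockcraft --classic
```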

townsend2010 commented 6 months ago

Hey @sed-i!

As you are really the main maintainer of this Blueprint, are you good with this addition?

Thanks!

sed-i commented 6 months ago

Hi @weiiwang01, My thinking was that local rock building is rare, so I hoped to include it in the docker blueprint instead: https://github.com/canonical/multipass-blueprints/pull/31. @townsend2010 suggested there to have a separate blueprint for rockcraft.

It seems healthy to keep charm-dev and rockcraft separate, though. If we include rockcraft, then snapcraft also makes sense, because machine charms are all about snaps.

Is there a particular reason you'd like this as part of charm-dev?

weiiwang01 commented 6 months ago

> Hi @weiiwang01, My thinking was that local rock building is rare, so I hoped to include it in the docker blueprint instead: #31. @townsend2010 suggested there to have a separate blueprint for rockcraft.
>
> It seems healthy to keep charm-dev and rockcraft separate, though. If we include rockcraft, then snapcraft also makes sense, because machine charms are all about snaps.
>
> Is there a particular reason you'd like this as part of charm-dev?

Yes, I understand the concerns about locally built rocks, but I believe the purpose of the multipass blueprint is primarily for learning and prototyping during charm development. At that stage, using rockcraft to build rocks locally for verification and testing may be necessary. As for snapcraft, I think we can add it if there's a need in the future. For now, we need to publish a new juju tutorial that uses rockcraft in the multipass blueprint.

townsend2010 commented 6 months ago

Hi!

Let me ask this- is everything that is already included in the charm-dev Blueprint relevant to what is needed for rockcraft and creating OCI images for Charms? If so, then it's probably OK to just add rockcraft like what's proposed here. If rockcraft doesn't need a significant portion of what the charm-dev Blueprint bootstraps, then I say we just create a new rockcraft-dev Blueprint.

townsend2010 commented 6 months ago

And regarding snapcraft, snapcraft already uses Multipass as a provider, so I don't see a reason to ever need snapcraft in a Blueprint.

sed-i commented 6 months ago

is everything that is already included in the charm-dev Blueprint relevant to what is needed for rockcraft and creating OCI images for Charms?

My understanding is that rockcraft-blueprint = (docker + skopeo + dive) + rockcraft ≈ (docker blueprint) + rockcraft (lxd comes for free). I.e. almost nothing in the charm-dev blueprint is needed for packing rocks.
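To make the composition above concrete, a standalone rockcraft blueprint might look something like the sketch below. The field names mirror the general shape of the blueprints in this repo, but the exact keys, image, and cloud-init contents are assumptions for illustration, not a reviewed definition.

```yaml
# Hypothetical rockcraft blueprint -- a sketch only, not a final spec.
description: Development environment for building rocks (OCI images)
version: latest
runs-on:
  - 22.04
instances:
  rockcraft:
    limits:
      min-cpu: 2
      min-mem: 4G
      min-disk: 30G
    cloud-init:
      runcmd:
        # Assumed package set: rockcraft itself plus the docker/skopeo/dive
        # overlap with the docker blueprint noted in this thread.
        - snap install rockcraft --classic
        - snap install docker
```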

weiiwang01 commented 6 months ago

The benefit of including rockcraft in the same Multipass blueprint is that users who want to develop or prototype a charm and a rock simultaneously don't have to switch between two different Multipass instances. The process is simpler and more efficient if all the toolchains exist within the same instance.

townsend2010 commented 6 months ago

Ok, I'm really unsure of what the end goal is here. It sounds to me like we need more than just the rockcraft snap added here, i.e., we need docker, skopeo, etc. as well. I really think we need a meeting to work out the best way to handle this.

@weiiwang01 Could you please work on setting up a meeting with your side, @sed-i, and myself? Thanks!

weiiwang01 commented 6 months ago

> Ok, I'm really unsure of what the end goal is here. It sounds to me like we need more than just the rockcraft snap added here, i.e., we need docker, skopeo, etc. as well. I really think we need a meeting to work out the best way to handle this.
>
> @weiiwang01 Could you please work on setting up a meeting with your side, @sed-i, and myself? Thanks!

Gotcha, invitation sent, thanks!

sed-i commented 6 months ago

For future reference:

| docker blueprint | ← overlap → | rockcraft blueprint | ← overlap → | charm-dev blueprint |
| --- | --- | --- | --- | --- |
| docker-compose, containerd | docker, skopeo, dive | rockcraft, firewall rules | charmcraft | juju, microk8s, some tools |
townsend2010 commented 6 months ago

Hey @sed-i!

Nice, thank you for the summary!

weiiwang01 commented 6 months ago

@townsend2010 @sed-i So, I ran a typical workflow for developing both a charm and a rock in the same VM, and it's a little concerning: in the end it used the entire default disk space (30 GB), causing processes to fail. Rockcraft's disk space consumption is quite large. I was wondering if we should increase the min-disk requirement along with the introduction of rockcraft, or consider other approaches given this fact.

So here's the workflow:

```shell
$ git clone https://github.com/canonical/discourse-k8s-operator.git
$ cd discourse-k8s-operator
$ charmcraft pack
$ cd discourse_rock/
$ rockcraft pack
$ /snap/rockcraft/current/bin/skopeo copy --insecure-policy --dest-tls-verify=false oci-archive:discourse_1.0_amd64.rock docker://localhost:32000/discourse:1.0
$ juju deploy redis-k8s
$ juju deploy postgresql-k8s --channel 14/stable --trust
$ juju deploy ./discourse-k8s_ubuntu-20.04-amd64.charm --resource discourse-image=localhost:32000/discourse:1.0
```
| Step | Total | Used | Free | Consumption |
| --- | --- | --- | --- | --- |
| Initial space | 29G | 9.1G | 20G | - |
| After `charmcraft pack` | 29G | 12G | 18G | 2.9G |
| After `rockcraft pack` | 29G | 22G | 7.4G | 10G |
| After `skopeo copy` | 29G | 23G | 6.4G | 1G |
| After `juju deploy` redis/postgresql/discourse (crashed) | 29G | 29G | 839M | 6G+ |
townsend2010 commented 6 months ago

Hi @weiiwang01!

IMO, it's fine to increase the minimum disk size, but how much do you think is needed for most/all cases here? This was certainly a concern of @sed-i.

sed-i commented 6 months ago

So the 30G is for trying out hello-kubecon and stuff like that. When I launch charm-dev locally, I always override the default with 50G, and that's without rockcraft. Even then, charmcraft quickly fills up the disk (related: https://github.com/canonical/charmcraft/issues/1042).

We could:

  1. Update the default to 50G; and
  2. Add a comment to the blueprint that rockcraft users would need to override; and
  3. In the rockcraft related docs, always provide a launch command with --disk of... 75G?
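For option 3, the launch command in the docs might look like the following. The 75G figure is the number floated above; the flag spellings are from the current Multipass CLI (older releases used `--mem` instead of `--memory`), and the memory/CPU values are illustrative, not from this thread.

```shell
# Override the blueprint's default disk allocation at launch time
# for rockcraft users; memory and CPU overrides shown for completeness.
multipass launch charm-dev --disk 75G --memory 8G --cpus 2
```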
weiiwang01 commented 6 months ago

> So the 30G is for trying out hello-kubecon and stuff like that. When I launch charm-dev locally, I always override the default with 50G, and that's without rockcraft. Even then, charmcraft quickly fills up the disk (related: canonical/charmcraft#1042).
>
> We could:
>
>   1. Update the default to 50G; and
>   2. Add a comment to the blueprint that rockcraft users would need to override; and
>   3. In the rockcraft related docs, always provide a launch command with --disk of... 75G?

Okay, if the 30GB minimum is meant for a basic hello-world application, I tried a minimal rock (one that installs nginx), and its disk space usage was approximately 2.7GB. So keeping the 30GB minimum seems reasonable for adding a minimal rock. For practical applications, manually raising the limit to 50GB should be adequate for developing a single charm and a single rock at the same time.

townsend2010 commented 6 months ago

One thing to keep in mind is that the virtual disk image only increases host disk usage as the virtual disk fills up. So even if 50GB is allocated to the VM, it will use less host disk space until the virtual disk actually fills up. I think we should bump it to 50GB to cover the most common use cases here.

weiiwang01 commented 6 months ago

> One thing to keep in mind is that the virtual disk image only increases host disk usage as the virtual disk fills up. So even if 50GB is allocated to the VM, it will use less host disk space until the virtual disk actually fills up. I think we should bump it to 50GB to cover the most common use cases here.

Gotcha, thanks! I have bumped it to 50G.