shykes opened 1 month ago
I agree, the learning curve for Dagger adoption is a bit steeper than it looks on the surface.
I do have a comment about having two things pursuing the same goal.
From my experience with other tools in that space, e.g. GitHub Actions and GitLab CI, both offer different ways to construct a workflow file. While both files end up doing the same thing, one looks entirely different from the other. One is definitely smaller and nicer. However, and here is my big pay-attention mark!
I am a big fan of RTFM-ability and Googleability when building and consuming tools, frameworks, and concepts. Having multiple ways to do the same thing does not help in that regard. I believe it is counterproductive in the long term, because of the ambiguity of the resources you find online. GitHub Actions and GitLab CI are good examples. (Not so much Dockerfile and Compose, as their complexity evolved but not their ambiguity.)
Adoption and usage of AI is a much bigger factor here; I think flattening the learning curve with something simpler will not, by itself, make a dent in adoption. Many devs (>90%) nowadays use AI tools for software development. Doing the same thing in different ways will not help here either. For faster and easier adoption of Dagger, I would personally recommend tuning, generating, and assembling content for AI, or a Dagger RAG.
IMHO, https://www.pulumi.com/ai/ is doing an outstanding job, helping developers adopt Pulumi. I frequently use it, and I know others who do the same.
So my TL;DR: helping devs with more examples and AI-ability will be more effective for Dagger's adoption.
Big compliment to the Dagger Cookbook; that really helped me understand how to use Dagger.
An isolated comment on this bit:
I think in practice, that requires containerizing the dev environment, and managing it in Dagger. IDE integration would then involve a devcontainer bridge, or equivalent. That makes IDE integration harder to ship, but we will need to cross that bridge eventually: the tech industry is already gravitating towards containerized dev environments, it's kind of silly that one of the most advanced container-based platforms out there, has no plan for that - not only for its own platform, which literally can only run in containers.
I'd rather this wasn't forced, but would be fine with it being optional (and easy to demonstrate the simplicity/effectiveness of!). Devcontainers might be growing in popularity (for good reason), but they're unheard of and unused in many areas. I'm not saying this isn't a good idea, just that adopting new tools comes with organisational challenges, and asking organisations to adopt devcontainers at the same time might be a little too much.
Seamless IDE integration
I think that to make single-file modules a viable option as the default, while we don't need to be seamless, we should aim to be nearly as good as a standard setup today. Otherwise, I suspect most users would quickly bail out. That said, if we had this, I would switch over instantly; I would love to be rid of the clutter we currently have.
From my memory, IDE integration these days tends to involve:
Even if we use containerized dev environments, I think we still end up needing to do some of the above, since those are the integration points for IDEs. Thankfully, because of treesitter/lsps, we hopefully would be able to re-use most of the work if we support multiple IDEs.
Depending on what exactly the format of single-file modules looks like, an incremental approach means we could start with the simplest item at the top (syntax highlighting) and move into the more complex bits later on.
In terms of how we might do LSPs... we could do something like this: https://code.visualstudio.com/api/language-extensions/embedded-languages
This would likely be a chunk of work, but potentially we could get pretty good results (might be worth prototyping something). I was looking into prior work:
We've also just talked about this in the context of wanting to avoid needing constant dagger.gen.go regeneration for IDE support - if we could make our own language servers that wrap go/python/typescript/etc, we'd also hugely simplify that.
It feels like at some point we're going to want magical wrapping language servers - I think we should prototype something here to see how feasible it would be.
Speaking from the POV of Python modules here.
I think we should address the fact that our current IDE support is already not great, and we don't have a clear plan for addressing that. In fact, I don't think we even have a clear consensus on how we want to integrate with IDEs.
My approach has been to make modules normal Python packages, using conventions and standards as much as possible. I think this has the best chance of working as expected in several IDEs, without much effort from us. At least the Python SDK doesn’t cater to any one IDE specifically. And it doesn’t need to.
When you open a PR replacing a 10-line Dockerfile with a 50-file PR including a 100-line go.sum, that makes Dagger seem complex and intimidating, no matter how smooth the IDE integration.
Yeah, that's a good point, but it's also an exaggeration (to prove a point, I know). It's always the same question: I don't have enough data to know whether the single 10-line Dockerfile (or equivalent) is the most common case or not. In any case, there are things that seem unavoidable, just because you're using a programming language.
One SDK could be generating more files than another, for example, so it could just be a matter of choice between using the language that you love (and accept), or using something that’s simpler, since you can have an SDK that only needs one file (e.g., a configuration language).
I get that the amount of files that’s generated seems intimidating, and reducing to one file as much as possible would look simpler, but if you’re developing the module, and creating functions, I think you’ll end up having a worse experience.
That's why we have .gitignore. If you're developing, you run dagger develop to get the code locally, which helps with IDE integration. If not, then those files aren't generated anyway.
But if users get a single file by default, I think they'd want IDE integration pretty often, requiring an eject, which adds a lot of friction. I'm not sure about the feasibility of devcontainers to bridge the gap here.
I've said this before, but my intention with the Python SDK was to generate a minimal version of the client library for modules (like Go), but I was pressed for time so decided to just vendor it all and come back to slim it down later. This has always been on my list, but I have since adjusted that plan to make room for future things (like depending on modules from dagger run). That's much easier to adjust in Python than in Go today because of the vendoring.
What I want to say is that we can slim it down more, we just haven't prioritized it. For example, I want to decouple the library's components enough so that modules don't have the provisioning code, and the code that supports running modules doesn't need to be generated locally, nor published publicly. It's only the client bindings that need patching.
We could exclude the module support stuff from being published to the registry, use that as a dependency in the module (offloading all that code), and just patch in the bindings. So there’s ways to optimize this in the future.
However, what about the 100-line go.sum? If you have a single-file module, how else do you pin your dependencies? Because if you don’t, running the same module could easily break next week.
Simplicity: developing your Dagger module should involve as few new files and concepts as possible.
So what’s the bare minimum that I can do in Python for a single-file module?
- main.py
- uv.lock
- dagger.json
If you need IDE integration, these files are useful to commit (so that generated files don't have to):
- .gitignore
- .gitattributes (just helps with linguist in GitHub, not absolutely required)

And if you want to split into more files, you should move to a package:
- pyproject.toml
- main.py → src/my_module/main.py
- src/my_module/__init__.py (not absolutely required, just best practice)

However, having a package is a better default because we've learned that it's a bigger ask for people to know how to move to that properly.
Which means that ejecting a single-file Python module essentially only needs to add a .gitignore and a pyproject.toml. Two more for best practices. And the generated ones are optional, and aren't available in a single-file module anyway.
So it doesn't seem worth it to me, and it only adds complexity to the learning curve to have this difference. But I'm open to a dagger init --sdk=python-single that users can opt in to.
.gitignore
For example, we have already made the IDE integration a little worse, for the sake of simplicity, when we added generated boilerplate to .gitignore instead of committing them. I remember we flip-flopped a lot on this point.
I don’t remember the flip-flop, but I do remember pushing for it. I really didn’t want to commit generated files and still don’t. It happens with some Go modules still because of the linter, but that’s not an issue with Python.
I don't think it's too much to require a dagger develop. You'll only need that generated code if you're authoring in an IDE. If you committed them and you just need to run, there are more files lying around in all cases. So this just adds one extra .gitignore and a .gitattributes to the module's committed code.
My personal opinion is that "seamless IDE integration" is not the right goal. It's great in theory, but it forces us to sacrifice too much simplicity.
My take is that the difference between the simplicity without IDE integration, and with it (considering what’s possible, not necessarily how it looks today), is not that big, and adds complexity when the former is no longer enough and the user needs to eject.
Magic feels nice but hides things and requires more custom documentation to explain.
I think we have a bigger learning curve when there are limitations and daggerisms on what you can do in your language of choice, in order to fit with our common API - not with module file structure, especially if it looks like a normal language package.
Especially when you take into account fragmented ecosystems like Python: seamless for whom? Every Python developer? Does that mean we have to seamlessly integrate every known permutation of dependency management and language server support in Python tooling? Good luck with that!
Good thing we have Astral 😄 This is where the Python SDK strikes a balance. It only supports the uv package manager (uv.lock). uv is pretty great, we don't need to support more. If you're on a system without any Python tooling, all you need is to download this single binary once, and run uv run vim in your module to set it all up for you and get code completions.
There's great flexibility in how to structure a Python module, even more with https://github.com/dagger/dagger/issues/8535 around the corner. But that's not because we have a lot of code to handle that, it's because we're using standards and it's the Python standards that provide the flexibility. What this means is usually a few settings in pyproject.toml and you can do quite a lot to fit your use case, without Dagger having to do anything about it explicitly.
As configurability goes, we have the following to help with reproducibility:
- pin the version of uv so it matches the one you use locally

However, the SDK module also offers a few escape hatches for some advanced use cases:
- opt out of uv and revert to plain old pip for installing your module's dependencies
- commit a requirements.lock file, and the SDK will fall back to that file if there's no uv.lock. Otherwise Poetry users don't get pinned dependencies, but Dagger supports a Poetry-specific pyproject.toml out of the box (as well as from other tools), not because it installs the tool, but because of the standard (specifically build backends in this case)

These are easy things to support though! So I'm able to strike quite a good balance with Python, I think.
But the same may be harder to do in TypeScript, not only in regard to the package managers, but also the different runtimes. So your argument is still valid in that ecosystem. I agree we shouldn't try to support everything. Each SDK needs to strike the right balance on what it can reasonably support.
I think in practice, that requires containerizing the dev environment, and managing it in Dagger. IDE integration would then involve a devcontainer bridge, or equivalent. That makes IDE integration harder to ship, but we will need to cross that bridge eventually: the tech industry is already gravitating towards containerized dev environments, it's kind of silly that one of the most advanced container-based platforms out there, has no plan for that - not only for its own platform, which literally can only run in containers.
I think it’s nice to support that as an opt-in, but not as the default for proper IDE support. Maybe at some point it’ll become clear that’s a viable option, but I have my doubts. Not sure how that technology has evolved and how accessible it is these days.
Don’t get me wrong, I kind of love the idea of a single-file module except for IDE integration in my case, and more generally for the reasons I listed when it relates to our users.
In any case, as I said, I think we can optimize the current situation further on the amount that is generated since there’s room for improvement there.
Beyond any implementation details, could we consider merging dagger.json + main.go into a single file? Something like:
```go
// main.dagger.go
package main

// we could have a custom dagger block somewhere in the file, this could be language specific
dagger (
	name "my-module"
	engineVersion "v0.13.6"
	dependency (
		name "hello"
		source "github.com/shykes/hello"
	)
)

// then my module as normal
type MyModule struct{}

func (m *MyModule) Hello(ctx context.Context) (string, error) {
	return dag.Hello().Hello(ctx)
}
```
I quite like the idea of a true single-file module. To do this, it would likely mean some extra work to get syntax highlighting + language server stuff - but as mentioned above, I think this work is sort of unavoidable in the long term anyway.
So where would all the other files go with this idea?
- dagger.json :heavy_plus_sign: main.go :arrow_right: dagger.main.go
- dagger.gen.go, internal/* :boom: automatically generated cleverly in the language server
- go.mod + go.sum :boom: implied by the engineVersion
- .gitignore + .gitattributes :boom: no files to record

However, what about the 100-line go.sum? If you have a single-file module, how else do you pin your dependencies? Because if you don't, running the same module could easily break next week.
Just to not lose my proposal in the above - I think it's worth discussing "implied by the engineVersion", and whether that idea could go anywhere.
For our "core" dependencies, we already have these recorded in the engine - since we control the generated code, any dependencies that aren't specifically user-requested can be locked using our lock file (we essentially already do this today in go - e.g. if a user module requests an old version of github.com/99designs/gqlgen and the engine has a new version, we'll generate an update for it).
For user dependencies, the problem is a bit trickier - one option would be to automagically determine them (like in go) and to insert version numbers. There's no go.sum, so it would need to be continuously recomputed - which is a downside. Or we could generate just a go.sum (with no go.mod) that users could commit themselves.
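To make the "continuously recomputed" option concrete, here is a sketch (illustrative only, not current Dagger behavior; the file name and placeholder module path are assumptions, and it assumes the plain Go source plus any generated bindings have already been materialized) of recomputing the pins in a scratch directory, so the user's repo only ever carries a go.sum, if anything:

```go
// recompute_gosum.go - illustrative only: synthesize go.mod/go.sum for a
// single-file module in a scratch directory, so neither needs to live in
// the user's repo (or only go.sum does, if we want committed pins).
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
)

func main() {
	src, err := os.ReadFile("main.go") // the module's materialized Go source
	if err != nil {
		panic(err)
	}

	tmp, err := os.MkdirTemp("", "dagger-module-*")
	if err != nil {
		panic(err)
	}
	defer os.RemoveAll(tmp)

	if err := os.WriteFile(filepath.Join(tmp, "main.go"), src, 0o644); err != nil {
		panic(err)
	}

	// Synthesize the module metadata instead of asking the user to commit it.
	for _, args := range [][]string{
		{"mod", "init", "example.com/generated/module"}, // placeholder module path
		{"mod", "tidy"},                                 // resolves deps and writes go.sum
	} {
		cmd := exec.Command("go", args...)
		cmd.Dir = tmp
		cmd.Stdout, cmd.Stderr = os.Stderr, os.Stderr
		if err := cmd.Run(); err != nil {
			panic(err)
		}
	}

	// The recomputed pins; a CLI could copy these back as the committed lock.
	sum, err := os.ReadFile(filepath.Join(tmp, "go.sum"))
	if os.IsNotExist(err) {
		fmt.Println("no external dependencies to pin")
		return
	}
	if err != nil {
		panic(err)
	}
	fmt.Print(string(sum))
}
```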
@jedevc would it be viable in the future, for each SDK to embed a custom language server? Then you could configure your IDE to call that. It would remove one load-bearing wall in this conversation: the need for checked-out files to match perfectly what a vanilla language server expects, always and without exception.
Re: user dependencies @jedevc @helderco. My assumption is that simple modules don't need a dependency file, because they only need 1) their stdlib 2) the Dagger API, 3) other Dagger modules.
But these are not a must-have to make single-file modules useful. For a v1, you just use the stdlib. If you need more, you graduate to a full-blown module. But chances are, by then you've been able to onboard and do useful things. Personally I can't remember the last time I needed more than the stdlib in my modules (granted, Go makes it so seamless I might have done it by accident :)
The point is: I challenge the assumption that the bare minimum is 3 files @helderco. The bare minimum is one file IMO.
would it be viable in the future, for each SDK to embed a custom language server? Then you could configure your IDE to call that.
Viable, yeah. Whether they're distributed as distinct components, or bundled into an SDK as a dagger.Service, doesn't hugely matter IMO. Not to trivialize the distribution part of this, but it's definitely the easiest bit; the actual language server bits are gonna be the heavy bits.
dagger.json as a universal lock file in the future
Lol, I was actually gonna comment this, and then went, huh maybe this is too far. Well, apparently not.
I really like this idea - whether it's in dagger.json or a dagger.lock.json, capturing the dependencies ourselves feels right. It also feels very linked to what @sipsma's been working on with layered-package builds in the SDK - where we're starting to pull more build logic from external tools into dagger, where we have more powerful caching/parallelism.
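As a strawman for what "capturing the dependencies ourselves" could look like, here is a hypothetical lock-file shape sketched as Go structs (none of these fields exist today; every name here is made up for illustration):

```go
// A hypothetical shape for a dagger-managed lock file (dagger.lock.json or a
// section of dagger.json) - purely illustrative, not an actual Dagger schema.
package main

import (
	"encoding/json"
	"fmt"
)

type LockedModule struct {
	Name   string `json:"name"`   // e.g. "hello"
	Source string `json:"source"` // e.g. "github.com/shykes/hello"
	Pin    string `json:"pin"`    // resolved commit or content digest
}

type LockedPackage struct {
	Name    string `json:"name"`    // a language-level dependency, if any
	Version string `json:"version"` // resolved version
	Sum     string `json:"sum"`     // checksum, playing the role of go.sum
}

type LockFile struct {
	EngineVersion string          `json:"engineVersion"`
	Modules       []LockedModule  `json:"modules"`
	Packages      []LockedPackage `json:"packages,omitempty"`
}

func main() {
	lock := LockFile{
		EngineVersion: "v0.13.6",
		Modules: []LockedModule{
			{Name: "hello", Source: "github.com/shykes/hello", Pin: "<resolved-commit>"},
		},
	}
	out, _ := json.MarshalIndent(lock, "", "  ")
	fmt.Println(string(out))
}
```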
My approach has been to make modules normal Python packages, using conventions and standards as much as possible. I think this has the best chance of working as expected in several IDEs, without much effort from us. At least the Python SDK doesn’t cater to any one IDE specifically. And it doesn’t need to.
Whatever the reason, our users often complain about issues between their modules and their IDE. I'm not judging whether it's the best it can be - I'm just saying it's objectively a cause of user problems.
Trimming down generated files
When you open a PR replacing a 10-line Dockerfile with a 50-file PR including a 100-line go.sum, that makes Dagger seem complex and intimidating, no matter how smooth the IDE integration.
Yeah, that’s a good point, but it’s also an exaggeration (to prove a point, I know).
Only because we gitignore the generated files... which we wouldn't do if we fully prioritized IDE integration. Just last week @aluzzardi was pointing out how confusing it is that a clean git clone results in a broken IDE experience, unless you know to run dagger develop, which is very hard to discover. So arguably the current situation is a "worst of both worlds" situation, where we get neither perfect simplicity nor perfect IDE integration.
It's always the same question: I don't have enough data to know whether the single 10-line Dockerfile (or equivalent) is the most common case or not. In any case, there are things that seem unavoidable, just because you're using a programming language.
Let's ask @kpenfound @jpadams @levlaz @marcosnils fresh from their experience "daggerizing the world" :)
PS. @helderco we should daggerize a few projects together, IMO it will do wonders for aligning our expectations. It helped a lot for the group of people above!
That's why we have .gitignore. If you're developing, you run dagger develop to get the code locally, which helps with IDE integration. If not, then those files aren't generated anyway.
That .gitignore is a compromise in IDE integration. All I'm doing is proposing a few more compromises in the same direction :)
But if users get a single-file by default, I think they’d want IDE integration pretty often, requiring an eject, which adds a lot of friction. I’m not sure about the feasibility of devcontainers to bridge the gap here.
Yeah, I'm not sure either. And I regret mentioning "devcontainers" specifically, because I don't think we should necessarily use that tech. I used it as an example of the growing acceptance of containerized dev environments, and of using IDEs as a frontend to them.
What I want to say is that we can slim it down more, we just haven’t prioritized it. For example, I want to decouple the library’s components enough so that modules don’t have the provisioning code, and the code that supports running modules doesn’t need to be generated locally, nor published publicly. It’s only the client bindings that we need patching.
I think we should pursue all that regardless. But I don't think it gets us below a minimum bar of complexity for day one.
Good thing we have Astral 😄 This is where the Python SDK strikes a balance. It only supports the uv package manager (uv.lock). uv is pretty great, we don't need to support more. If you're on a system without any Python tooling, all you need is to download this single binary once, and run uv run vim in your module to set it all up for you and get code completions.
This is a crutch. I shouldn't have to install anything on my machine to develop a Dagger module, other than Dagger.
There’s great flexibility in how to structure a Python module, even more with #8535 around the corner.
That's not a good thing... The cult of flexibility is one reason the Python DX is bad (my opinion). The Dagger DX will never prioritize flexibility at the same level as Python, because it's unhealthy. We need a healthy amount of flexibility of course; and we need to accommodate the sensibilities of the Python community. But I understand now that we have to set limits to that accommodation.
But that's not because we have a lot of code to handle that, it's because we're using standards and it's the Python standards that provide the flexibility. What this means is usually a few settings in pyproject.toml and you can do quite a lot to fit your use case, without Dagger having to do anything about it explicitly.
I don't mind following Python standards when it makes sense for us to do so. But I've reached the conclusion that following all Python standards, all the time, for the sake of following them, is a non-goal.
If I need a simpler DX for Dagger in Python, and blindly following Python standards doesn't provide that, then we will try to design something better.
cc @kpenfound who I know has opinions on this topic.
However, the SDK module also offers a few escape hatches for some advanced use cases:
Escape hatches are great. Let's move more of our current DX in there :)
Devcontainers
I think in practice, that requires containerizing the dev environment, and managing it in Dagger. IDE integration would then involve a devcontainer bridge, or equivalent.
Yes, agreed. I should not have used the word "devcontainers". What you're describing is better.
I think it’s nice to support that as an opt-in, but not as the default for proper IDE support. Maybe at some point it’ll become clear that’s a viable option, but I have my doubts. Not sure how that technology has evolved and how accessible it is these days.
This may turn out to be the correct answer. But we have to push a little harder on possible designs, and make sure we've exhausted all options. The current status quo of our DX is simply not good enough (I know we disagree on that).
It's worth pushing hard on IDE integration (whether via containerizing the dev env, or customizing the language server, or something else) because it could open a lot of options for us to make the DX much much simpler.
would it be viable in the future, for each SDK to embed a custom language server? Then you could configure your IDE to call that.
Viable, yeah. Whether they're distributed as distinct components, or bundled into an SDK as a dagger.Service, doesn't hugely matter IMO. Not to trivialize the distribution part of this, but it's definitely the easiest bit; the actual language server bits are gonna be the heavy bits.
I agree, I was thinking more of the "actual language server bits", as well as "how to configure the IDE to use it". Would that be realistic? For example, could each SDK ship a language server that runs inside the containerized environment with fully generated files, so that you don't need them always checked out? Would that work? Could it be applied to single-file modules also? Or would it break because, for example, IDEs would see position info in files they don't see, and freak out?
I agree, I was thinking more of the "actual language server bits", as well as "how to configure the IDE to use it". Would that be realistic? For example, could each SDK ship a language server that runs inside the containerized environment with fully generated files, so that you don't need them always checked out? Would that work? Could it be applied to single-file modules also? Or would it break because, for example, IDEs would see position info in files they don't see, and freak out?
Potentially yeah - though I think this is probably a layer on top of a LSP/treesitter implementation that we'd write. I need to dive more deeply into devcontainers to understand what we'd put in this layer vs in other layers.
with fully generated files, so that you don't need them always checked out
We can actually get the language server itself to do this - meaning we can do it live in response to the client. An example flow for go I'm imagining (see the sketch after this list):
- The IDE starts our language server, maybe via something like dagger call ... sdk go lsp up - it doesn't really matter.
- We run dagger develop inside the language server - then we pass this modified filesystem to the underlying go language server.
- On relevant requests, we re-run dagger develop to refresh dagger.gen.go before forwarding the request - that way, we always get the right up-to-date contents.
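A minimal pass-through sketch of that flow (assuming gopls and the dagger CLI are on the PATH; the per-request regeneration is only marked as a comment, since a real wrapper would need to decode LSP messages in between):

```go
// wrap_gopls.go - a minimal sketch of the flow above, not a real Dagger tool.
package main

import (
	"os"
	"os/exec"
)

func main() {
	moduleDir := "." // the module being edited

	// Materialize dagger.gen.go & friends once up front. The output goes to
	// stderr so it can't corrupt the LSP's stdio channel.
	develop := exec.Command("dagger", "develop")
	develop.Dir = moduleDir
	develop.Stdout, develop.Stderr = os.Stderr, os.Stderr
	if err := develop.Run(); err != nil {
		panic(err)
	}

	// Hand the (now complete) filesystem to the real Go language server.
	// A real wrapper would sit between os.Stdin/os.Stdout and gopls,
	// decoding LSP messages and re-running codegen before forwarding
	// requests that could be affected by a stale dagger.gen.go.
	gopls := exec.Command("gopls", "serve")
	gopls.Dir = moduleDir
	gopls.Stdin = os.Stdin
	gopls.Stdout = os.Stdout
	gopls.Stderr = os.Stderr
	if err := gopls.Run(); err != nil {
		panic(err)
	}
}
```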
Good thing we have Astral 😄 This is where the Python SDK strikes a balance. It only supports the uv package manager (uv.lock). uv is pretty great, we don’t need to support more. If you’re on a system without any Python tooling, all you need is to download this single binary once, and run uv run vim in your module to set it all up for you and get code completions.
This is a crutch. I shouldn't have to install anything on my machine to develop a Dagger module, other than Dagger.
+1. The uv run thing is great in that it makes completions actually work, however it still depends on dagger develop having been run. Which I think leads back to @jedevc's proposal above.
@jed wait, the language server protocol handles file sync? So if we ran the language server in a container next to the generated files, and the IDE were configured to talk to that custom language server, we wouldn't have to implement our own filesync into the container? That part would already be handled by the language server protocol? Seems too good to be true 🙂
All good points.
Ah my bad, there's no sync in the language server protocol. I guess that rules out running it in an entirely separate container, some component of the LSP server will need to run on the host next to the editor.
But we can still do this: the language server just accesses the local files on the host, and can create "virtual files" in /tmp to jump to - all the generated files are still hidden from the user's current directory. This is sort of how go's LSP allows you to jump to definitions in the stdlib, even when they're not part of your project.
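A tiny sketch of that virtual-file trick (the Location type is hand-rolled here for illustration, not taken from a real LSP library):

```go
// virtual_file.go - sketch of the "virtual files" idea: write generated code
// outside the project and point jump-to-definition at it.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// Location stands in for an LSP definition result; real servers use the
// protocol's own types.
type Location struct {
	URI  string // e.g. "file:///tmp/.../dagger.gen.go"
	Line int    // position inside the generated file
}

func main() {
	generated := []byte("package main\n\n// ... generated bindings ...\n")

	dir, err := os.MkdirTemp("", "dagger-virtual-*")
	if err != nil {
		panic(err)
	}
	path := filepath.Join(dir, "dagger.gen.go")
	if err := os.WriteFile(path, generated, 0o644); err != nil {
		panic(err)
	}

	// A wrapping language server would return something like this for a
	// definition request on a generated symbol - the file never has to be
	// checked into, or even present in, the user's module directory.
	loc := Location{URI: "file://" + path, Line: 3}
	fmt.Printf("%+v\n", loc)
}
```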
If we want the LSP to live inside the dagger engine then this option is a lot harder, I think that's the case for devcontainers maybe. But I don't see an issue with distributing the LSP as separate binaries (or bundled into the CLI), if we need to.
modules are powerful, but they have a steep learning curve. Going "from zero to first module" is major source of friction for adopting Dagger. I think we have 2 competing priorities: seamless IDE integration vs. simplicity.
For me, IDE integration is simplicity.
While a single file looks simple on the surface, there is a 0% chance a user can be successful without going through our docs. We can mitigate that with LSPs and tricks like that, but at the end of the day, it's non-standard magic the user will have to read docs to set up.
This goes back to the conversation of good docs vs self documenting. IMHO the most simple things are the ones I can use intuitively without reading any documentation, figuring them out from context of what I already know.
This is my biggest gripe with dagger develop -- IMHO pre-modules dagger was simpler to adopt since it didn't require learning non-standard paradigms. If I knew how to get started with e.g. go-sqlite, I knew how to get started with dagger. Single-file modules (in my opinion) would be going in the opposite direction of where we should be.
@helderco I don’t remember the flip-flop, but I do remember pushing for it. I really didn’t want to commit generated files and still don’t.
This is language-dependent. That's the case in Python, but for instance in Go it's an anti-pattern NOT to commit generated files. gRPC, gqlgen, sqlboiler, etc. all require committing generated files. We're the only Go project I've ever encountered not to do this (which requires a Go user to read docs, since they very likely never encountered this behavior before trying dagger).
For me, IDE integration is simplicity.
@aluzzardi yeah that's fair. It is absolutely a dimension of simplicity. Perhaps my initial framing was wrong. How about this:
So while I agree that "IDE integration is simplicity", there is another dimension to this that we should acknowledge.
pre-modules dagger was simpler to adopt since it didn't require learning non-standard paradigms
But it was also completely out of reach from much of the devops community. "Choose your favorite SDK, then use our library to develop a custom CLI that will implement your CI". Maybe we were simpler to adopt for people who find that pitch compelling - but that's a very small elite group of platform engineers. Let's also acknowledge that!
Problem
As mentioned by @kjuulh and others, modules are powerful, but they have a steep learning curve. Going "from zero to first module" is major source of friction for adopting Dagger.
Solution
One possible solution: introduce the concept of single-file modules.
- Today: dagger init --sdk=foo creates a dagger.json + a complete boilerplate source directory
- With single-file modules: dagger init --sdk=foo creates a dagger.json + a single boilerplate source file

In summary:
The question of IDE support
One obvious argument against single-file modules is: "it won't work out of the box with IDEs". I think we should address the fact that our current IDE support is already not great, and we don't have a clear plan for addressing that. In fact, I don't think we even have a clear consensus on how we want to integrate with IDEs. I think to resolve the question of single-file modules, we'll need to resolve that.
I think we have 2 competing priorities: seamless IDE integration vs. simplicity.
- Seamless IDE integration: developing your Dagger module should work right away, without doing any work or learning new concepts. Ideally, git pull, open my IDE, and boom, everything works. If I don't have that, it can make the experience of developing the module frustrating, and the argument of "it's code in a language you understand!" less compelling.
- Simplicity: developing your Dagger module should involve as few new files and concepts as possible. When you open a PR replacing a 10-line Dockerfile with a 50-file PR including a 100-line go.sum, that makes Dagger seem complex and intimidating, no matter how smooth the IDE integration.
The single-file module idea forces us to address the fact that sometimes, these two priorities conflict.
For example, we have already made the IDE integration a little worse, for the sake of simplicity, when we added generated boilerplate to .gitignore instead of committing them. I remember we flip-flopped a lot on this point.

My personal opinion is that "seamless IDE integration" is not the right goal. It's great in theory, but it forces us to sacrifice too much simplicity. Especially when you take into account fragmented ecosystems like Python: seamless for whom? Every Python developer? Does that mean we have to seamlessly integrate every known permutation of dependency management and language server support in Python tooling? Good luck with that!
Instead, I think we should 1) prioritize simplicity, 2) ship IDE integration that is not seamless, but simple and powerful. I think in practice, that requires containerizing the dev environment, and managing it in Dagger. IDE integration would then involve a devcontainer bridge, or equivalent. That makes IDE integration harder to ship, but we will need to cross that bridge eventually: the tech industry is already gravitating towards containerized dev environments, it's kind of silly that one of the most advanced container-based platforms out there, has no plan for that - not only for its own platform, which literally can only run in containers.
So, that's my conversation starter :) I know between @aluzzardi, @sipsma, @vito, @marcosnils and all the commanders, there will be plenty of opinions on this topic! Let's discuss!