eclipse-che / che

Kubernetes based Cloud Development Environments for Enterprise Teams
http://eclipse.org/che
Eclipse Public License 2.0

Separate vscode-java version from development target #16851

Open ericwill opened 4 years ago

ericwill commented 4 years ago

The Problem

Solutions

We need a way to separate the Java tooling environment from the Java target development environment/compliance level. Investigation needs to be done first, but an ideal solution would allow:

Subtasks/issues to follow after suitable investigation into potential designs has been conducted.

Sub-issues:

tsmaeder commented 4 years ago

Separation of Java runtime and target

Currently, we have no possibility in Che to run the Java language server included with VSCode-Java (jdt.ls) against a different JDK than the one jdt.ls itself runs on. This is a problem for a number of reasons:

  1. Pretty soon, jdt.ls will require Java 11 to run. This would mean that we cannot provide a devfile that uses Java 8 to provide language services for Java 8 projects. For example, when looking up the source of java.io.File in the debugger, it will always present the user with the source from Java 11, not Java 8.

  2. Currently, we need to maintain a separate plugin for each of the Java versions we want to support. The problem is worsened by the fact that we have multiple plugins that include Java support, in particular Quarkus. All these versions need to be maintained.

  3. Even if we had versions of our plugins for each major Java version, we would be overwhelmed with maintaining a plugin for each Java version published. One of the goals of developing with containers is to prevent the "it works on my machine"-problem. So ideally, jdt.ls would see the exact same version of the JDK the user will use to run their program in production.

Goals

  1. Run jdt.ls with a different JDK than the target JDK. This is non-optional because jdt.ls needs to work against Java 8 while running on Java >= 11.

  2. Provide only one version of the Java plugin and still support multiple JDK target versions

  3. Use a user-provided target JDK

Approaches

jdt.ls has the possiblity to use a different JDK as the target than the one it's running on. A list of jdk install paths can be passed as a VS Code-Java preference. We can provide preference defaults for Theia in a devfile.

So the problem becomes one of presenting a second JDK to VS Code Java in a location on the file system.

1. Include the JRE with the plugin

The trivial solution to the problem is to include both a target and a runtime Java version in the Java plugins that we already have. We'd use the same runtime (Java 11) for jdt.ls and add a second JDK to serve as the target JDK. This addresses goal 1, but not goals 2 and 3.

2. Run jdt.ls in the "dev" container

Theoretically, we could also run jdt.ls on the user-provided "dev container": however, that would still require two JDKs to be present in the dev container, one to run jdt.ls and one as a target.

3. Mount the JDK into the container

Beyond that, we would have to provide the JDK to jdt.ls from the outside. We can separate this issue into two parts: first, the JDK files need to be provided somehow, and second, the files need to be consumed by VS Code Java, and ultimately jdt.ls.

For the consumption side, @benoitf has suggested we might try to override the default file system provider for Java. There might then be a local server container in the pod that could serve the necessary files via some protocol.

The only other way seems to be via a kubernetes volume mounted in the Java plugin container.

One idea was to run a local file server in a container in the pod that would serve a JDK directory installed in that container, and to mount that JDK inside the Java plugin sidecar. However, I don't believe that Kubernetes and/or OpenShift would allow that in a container.

The next approach would be to provide the JDK in a kubernetes volume. Option one would be a non-persistent volume. This would mean we have to put the JDK into that volume each time. I did a quick test on openshift.io, copying the JDK 11 directory from /usr/lib/jvm into /projects (ca. 250 MB): when using ephemeral storage, the copy took <1.5 seconds. When using persistent storage, the copy took ~15s.

That measurement raises the question of whether it's worth using persistent storage for the JDK at all, considering that the performance of persistent storage is likely to be worse than what can be expected from ephemeral storage.

Proposal

So my proposal would be to package JDK versions as init containers. When the init container runs, it will copy the included JDK to a directory given as an env variable. A devfile for a particular version of Java would then point jdt.ls to a "target runtime" by including the proper VS Code-Java settings. I would leave the choice of using either persistent or ephemeral storage up to the devfile. I believe that currently there is no way to specify a volume as "ephemeral" or "local": that can only be done globally for the devfile.
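
To make that concrete, here is a minimal sketch of what such an init container fragment could look like. The container name, the TARGET_JDK_DIR variable, and the volume name are made up for illustration; the image and source path match the PoC further down:

spec:
  initContainers:
    - name: copy-target-jdk
      image: adoptopenjdk/openjdk11-openj9
      env:
        # destination directory passed in as an environment variable (name is hypothetical)
        - name: TARGET_JDK_DIR
          value: /tmp/jdk
      command: ['sh', '-c', 'cp -R /opt/java/openjdk/ "$TARGET_JDK_DIR"']
      volumeMounts:
        # volume shared with the Java plugin sidecar
        - name: jdk
          mountPath: /tmp/jdk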

benoitf commented 4 years ago

I think the idea is to be able to use the existing Java images available on docker registries [1] (and not to package and maintain images ourselves).

For example https://hub.docker.com/_/openjdk?tab=tags

so ideally the proposal should work with these images

The virtual filesystem approach would have been cool to me because you could always share it with many workspaces. Fewer environment variables, more automatic discovery.

tsmaeder commented 4 years ago

@benoitf we are already capable of overriding container entrypoints: it's just a matter of doing a `cp $SRC $DEST` in the container. This mechanism would not, and should not, be restricted to Java. I'm pretty sure it would work at the kubernetes level; what remains to be seen is how we can express it in a devfile (some work might be required).
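
For illustration only, a minimal sketch of what such an entrypoint override could look like as a plain dockerimage component. The alias, volume name, and the tail -f trick to keep the container alive are assumptions, not an existing stack:

components:
  - type: dockerimage
    alias: target-jdk
    image: adoptopenjdk/openjdk11-openj9
    memoryLimit: 256Mi
    command: ['sh', '-c']
    # copy the JDK into the shared volume, then keep the container alive
    args: ['cp -R /opt/java/openjdk/ /tmp/jdk && tail -f /dev/null']
    volumes:
      - name: jdk
        containerPath: /tmp/jdk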

tsmaeder commented 4 years ago

So here's a devfile that sets up openj9 as the target jdk:

metadata:
  name: testJDKVersionKube
projects:
  - name: console-java-simple
    source:
      location: 'https://github.com/che-samples/console-java-simple.git'
      type: git
      branch: java1.11
attributes:
  persistVolumes: 'false'
components:
  - id: redhat/java11/latest
    type: chePlugin
  - mountSources: true
    memoryLimit: 512Mi
    type: dockerimage
    volumes:
      - name: jdk
        containerPath: /tmp/jdk
      - name: m2
        containerPath: /home/user/.m2
    image: 'quay.io/eclipse/che-java11-maven:7.13.1'
    alias: maven
    env:
      - value: ''
        name: MAVEN_CONFIG
      - value: >-
          -XX:MaxRAMPercentage=50 -XX:+UseParallelGC -XX:MinHeapFreeRatio=10
          -XX:MaxHeapFreeRatio=20 -XX:GCTimeRatio=4
          -XX:AdaptiveSizePolicyWeight=90 -Dsun.zip.disableMemoryMapping=true
          -Xms20m -Djava.security.egd=file:/dev/./urandom
        name: JAVA_OPTS
      - value: >-
          -XX:MaxRAMPercentage=50 -XX:+UseParallelGC -XX:MinHeapFreeRatio=10
          -XX:MaxHeapFreeRatio=20 -XX:GCTimeRatio=4
          -XX:AdaptiveSizePolicyWeight=90 -Dsun.zip.disableMemoryMapping=true
          -Xms20m -Djava.security.egd=file:/dev/./urandom -Duser.home=/home/user
        name: MAVEN_OPTS
      - value: >-
          -XX:MaxRAMPercentage=50 -XX:+UseParallelGC -XX:MinHeapFreeRatio=10
          -XX:MaxHeapFreeRatio=20 -XX:GCTimeRatio=4
          -XX:AdaptiveSizePolicyWeight=90 -Dsun.zip.disableMemoryMapping=true
          -Xms20m -Djava.security.egd=file:/dev/./urandom
        name: JAVA_TOOL_OPTIONS
  - referenceContent: |
      apiVersion: v1
      kind: Pod
      metadata:
        name: ws
      spec:
        initContainers:
          - name: init-myservice
            image: adoptopenjdk/openjdk11-openj9
            command: ['sh', '-c', "echo starting && cp -R /opt/java/openjdk/ /tmp/jdk && echo done && sleep 20"]
            volumeMounts: 
              - name:  remote-endpoint
                mountPath: /tmp/jdk
    type: kubernetes
apiVersion: 1.0.0
commands:
  - name: maven build
    actions:
      - workdir: '${CHE_PROJECTS_ROOT}/console-java-simple'
        type: exec
        command: mvn clean install
        component: maven
  - name: maven build and run
    actions:
      - workdir: '${CHE_PROJECTS_ROOT}/console-java-simple'
        type: exec
        command: mvn clean install && java -jar ./target/*.jar
        component: maven

In addition, you'll have to add the following to the user Theia settings:

"java.configuration.runtimes": [
    {
        "name": "JavaSE-11",
        "path": "/remote-endpoint/openjdk",
        "default": true
    }
]

tsmaeder commented 4 years ago

A little demo: https://youtu.be/NLuG7gWdL6w

tsmaeder commented 4 years ago

There are two problems currently:

tsmaeder commented 4 years ago

Note that in the PoC, I use a stock JDK image. We could also use the maven container as an init container: pulling it twice should not be expensive.

l0rd commented 4 years ago

The init container solution solves nicely the goal n.3: "Use a user-provided target JDK".

But I am wondering whether, for the target JDKs that we are going to include in Che (because we are still going to provide some default target JDKs, right?), this is a better solution than providing different versions of the java plugin sidecar.

Using an init container adds an extra step to the workspace bootstrap, and it makes the meta.yaml harder to write and read. These are costs we can accept if they bring value to the user. But if 99% of the time users will just use the default target JDK, is it really worth it? Shouldn't we just document how to use init containers to inject user-provided target JDKs, and use sidecars for the default ones?

In your example you are defining the initContainer in the devfile rather than in the meta.yaml. Is that just for PoC purposes? Because if it's not in a meta.yaml it won't be reusable in other devfiles.

tsmaeder commented 4 years ago

I don't think the complication of writing a devfile is really a problem: we have to have the init container whether it lives in the devfile or in a meta.yaml. If we write the devfile, the complication is not a problem; if users bring their own JDK, writing a devfile with an init container is no more and no less complicated than writing a meta.yaml with an init container (slightly less, even, since they see devfiles all the time). What we'll end up with is 1 Java plugin and n devfiles, where n is the number of JDKs we want to support. That is important in order to reduce unnecessary maintenance and testing overhead. @l0rd could you describe the alternative solution you have in mind? How would you reuse the init container?

tsmaeder commented 4 years ago

Also, for any m+n solution (as opposed to m*n), we'll need an additional container in the workspace, whether it be a plugin per JDK version or an init container referenced in the devfile. Copying the JDK is cheap: cf. the numbers above.

l0rd commented 4 years ago

What we'll end up with is 1 Java plugin and n devfiles, where n is the number of JDKs we want to support.

Users that need to create their custom java stack BUT are ok to use a default JDK won't have an easy way to do that: they will still need to use the initContainer mechanism as if they were including a "user-provided" JDK. And imo this is what 99% of users will want to do.

Could you describe the alternative solution you have in mind? How would you reuse the init container?

If you include the initContainers mechanism in a meta.yaml you will be able to publish it in the plugin registry and users will be able to include the plugin in their devfiles.

tsmaeder commented 4 years ago

Users that need to create their custom java stack BUT are ok to use a default JDK

Why would they want a custom stack if they use the default JDK's? And they would just copy one of the provided devfiles. Simples!

l0rd commented 4 years ago

Even if they use the same jdk as in our getting started they may want to add some tools, change the projects, change commands...

ghost commented 4 years ago

@ericwill wdyt ^?

ericwill commented 4 years ago

IMO we should steer users towards writing devfiles -- of varying complexities. This means thoroughly documenting as many workflows as possible along with their configurations. Of course, adding more devfile samples will also help here -- if we can cover 90% or more of all use cases with good documentation/samples, then we are doing a good job.

I think initContainers in the devfile is the way forward, for a couple of reasons:

Having users write meta.yaml files for plugins is something we should avoid -- once we have proper plugin automation in place, user involvement with writing meta.yaml files should be a thing of the past.

Even if they use the same jdk as in our getting started they may want to add some tools, change the projects, change commands...

Maybe I am misunderstanding, but shouldn't this happen in the devfile regardless? From my POV:

  1. The plugin registry is a static list of available tools (plugins)
  2. A devfile defines a way to create a workspace using those pre-defined tools, with configuration options for said tools being definable/configurable in the devfile

We should focus on making 2. easy, while still allowing it to be powerful for advanced users.

l0rd commented 4 years ago

@ericwill initContainers are the way forward when users want to bring their own JDK.

But why do you want users to specify an initContainer

# using init container
---
components:
  - id: redhat/java11/latest
    type: chePlugin
  - referenceContent: |
      apiVersion: v1
      kind: Pod
      metadata:
        name: ws
      spec:
        initContainers:
          - name: init-myservice
            image: adoptopenjdk/openjdk11-openj9
            command: ['sh', '-c', "echo starting && cp -R /opt/java/openjdk/ /tmp/jdk && echo done && sleep 20"]
            volumeMounts: 
              - name:  remote-endpoint
                mountPath: /tmp/jdk
    type: kubernetes

when we can package that for them:

# including jdk8 in the sidecar
---
components:
  - id: redhat/java8/latest
    type: chePlugin

And when I say "we can package" I mean that using the vscode-extensions.json format we could reference multiple Dockerfiles for the same vscode-extension:

{
   "repository": "https://github.com/redhat-developer/vscode-java",
   "checkout": "v0.57.0",
   "dockerfiles": [
       "./dockerfiles/redhat-developer.vscode-java/Dockerfile.java8",
       "./dockerfiles/redhat-developer.vscode-java/Dockerfile.java11"
    ]
}
l0rd commented 4 years ago

@ericwill beyond the 14 extra lines needed in EVERY java devfile you also need to consider that the startup of the workspace will be slowed down by

  • the pull of the `initContainer` image
  • the copy of the JDK.

Two operations that are NOT needed in the sidecar way.

And all of that for what? What is the benefit? I cannot see one honestly.

ericwill commented 4 years ago

Well, there is the maintainability overhead. Using an initContainer we would have to maintain only one Java plugin with one sidecar, and for other versions users bring their own JDK. We can facilitate this, of course, by providing pre-made devfile samples for some of the popular JDKs (11, 14, etc. come to mind), all while just maintaining the one plugin and one sidecar.

Using the meta.yaml proposal, we'd have to maintain n number of sidecar images, as well as keep some number of Java plugins up-to-date and tested on each of those sidecars. Even with the automation it becomes a headache if we support more than two JDK versions.

@ericwill beyond the 14 extra lines needed in EVERY java devfile you also need to consider that the startup of the workspace will be slowed down by

* the pull of the `initContainer` image

* the copy of the JDK.
  Two operations that are NOT needed in the sidecar way.

The pulling I can see as a performance hit; can we optimize it somehow? For our pre-made devfile samples, can't we make use of the kubernetes image puller to cache the JDK images?
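
For reference, a rough sketch of pre-pulling the JDK image with the kubernetes-image-puller could be a ConfigMap along these lines; the key names and the name=image list format are assumptions based on the image puller's documented configuration, not verified here:

apiVersion: v1
kind: ConfigMap
metadata:
  name: k8s-image-puller
data:
  CACHING_INTERVAL_HOURS: "1"
  # semicolon-separated name=image pairs; format assumed, adjust to the image puller's actual schema
  IMAGES: "openjdk11-openj9=adoptopenjdk/openjdk11-openj9;"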

As for the copying of the JDK, from my understanding it's pretty cheap (@tsmaeder could you confirm)?

l0rd commented 4 years ago

Ok, as I see it, either we stop supporting jdt.ls with JDKs other than v11 so that we have fewer things to maintain, or we continue supporting jdt.ls with popular JDKs so that we make users' lives easier.

But by providing some sample devfiles with popular JDKs and NOT providing pre-packaged plugins, we will have the worst of both worlds! We will have to test jdt.ls against all JDKs AND users will need to write 14 extra lines of yaml to use those JDKs in their devfiles.

tsmaeder commented 4 years ago

@ericwill @l0rd copy of the JDK is around 1.5 seconds on openshift.io, as mentioned further up in this issue. Also, if we provide a different JDK as a container, we will always have to pull that container at some time. It doesn't matter if that container is referenced in a meta.yml or a devfile. So the net impact of the approach laid out above over what you champion is ~1.5 seconds per workspace start.

@l0rd if putting init containers into a workspace is so verbose, why not simplify it: we control the devfile syntax and can add any syntactic sugar we want. The 14 lines are there because I have to use the "kubernetes" type component and use the raw kube syntax for an init container. We could simplify that to something as simple as:

- initContainer: adoptopenjdk/openjdk11-openj9
  command: ['cp -R /opt/java/openjdk/ /tmp/jdk']

Also, we won't have to test the Java plugin against any JDKs. We'll always run jdt.ls on the same JDK. What we will have to test is that the examples in the devfiles work as intended, but that is a much smaller set of things that could go wrong.

Also, we would not have to maintain a set of images for the various JDKs. We would just use the stock JDK images.

I guess where we really differ is whether it is desirable to have users bring their own JDK to compile and debug against. I am convinced that people want exactly that. After all, one of the selling points of both containers in general and the Che 7 workspace model is that the development environment is the same as the runtime environment.

Ok as I see it either we stop supporting jdt.ls with JDKs other than v11 so that we have less things to maintain or we continue supporting jdt.ls with popular JDKs so that we make users life easier.

No, that is not the alternative: we just support other target JDK's via devfiles instead of making a custom plugin for each JDK version we support.

l0rd commented 4 years ago

@ericwill @l0rd copy of the JDK is around 1.5 seconds on openshift.io, as mentioned further up in this issue. Also, if we provide a different JDK as a container, we will always have to pull that container at some time. It doesn't matter if that container is referenced in a meta.yml or a devfile. So the net impact of the approach laid out above is ~1.5 seconds per workspace start.

We do not have to pull an init-container if the target JDK is embedded in the jdt.ls sidecar.

And we need a really good reason to accept adding 1.5s to the workspace startup process! We are trying hard to shave off seconds; adding even a fraction of a second for no good reason is NOT acceptable.

@l0rd if putting init containers into a workspace is so verbose, why not simplify it: we control the devfile syntax and can add any syntactic sugar we want.

That's a good point. I agree, we should do that for user provided JDKs. There are still 2 extra lines though.

Also, we would not have to maintain a set of images for the various JDK's. We would just use the stock JDK images.

But that's what we are already doing. We are already leveraging 3rd-party images rather than creating our own. The problem is that even if we use 3rd-party images, those get updated and we need to verify that the updates don't break our getting started. We have hundreds of 3rd-party artifacts to maintain and we won't get rid of that with your proposal.

To summarize, it is not clear what maintenance burden you are experiencing today and how you are going to get rid of it. Because in the end that's the only good reason that would justify the extra costs (startup time and more yaml lines in devfiles).

tsmaeder commented 4 years ago

So in your world, how would we support users that want to use OpenJ9? The only way I see with your proposal is to add plugins for Java, Quarkus etc. for all versions of OpenJ9, as well. That is a pretty big maintenance burden, in my book. Also, even the fact that we have to maintain plugins for the select few JDK versions we have now is something that we should get away from: it was a workaround for when jdt.ls could not separate the target JDK from the runtime. Apart from the maintenance, it makes no sense from the user's point of view, or have you ever wondered if you want to download Eclipse for Java 11 or Eclipse for Java 8?

l0rd commented 4 years ago

So in your world, how would we support users that want to use OpenJ9? The only way I see with your proposal is to add plugins for Java, Quarkus etc. for all versions of OpenJ9, as well. That is a pretty big maintenance burden, in my book.

This is the user provided JDK use case right? And as I think I have mentioned in all the comments above I am +1 to use your proposal for those use cases. That makes perfect sense.

With regards to vsx dependencies: let's consider that we have this pending issue that would allow merging all vsx into one container.

ericwill commented 4 years ago

To summarize, it is not clear what maintenance burden you are experiencing today and how you are going to get rid of it. Because in the end that's the only good reason that would justify the extra costs (startup time and more yaml lines in devfiles).

The cost we are trying to avoid is maintaining multiple plugins and their respective sidecars. Right now we need to maintain two plugins and two sidecars, just for the Java plugin. When we want to start supporting newer JDKs this cost will increase.

By the way, I am not talking about other tools/containers like the jdk-maven images; those are separate and nothing changes there -- I am specifically talking about vscode-java/jdt.ls and its sidecar. We will still have to validate different Java versions and samples in the devfiles we provide -- this proposal isn't trying to change that.

Consider the following: let's say today we wanted to add Java 14 support. With the current mechanism we would need to:

  • Add a new plugin to the registry
  • Add a new sidecar to the che-sidecar-java repo
  • Maintain both for x number of sprints
  • Add a new Java 14 devfile with code samples, Java 14 maven image, and other tools

With the initContainer approach the same scenario would look like this:

In the second scenario the only maintenance associated with the Java plugin is maintaining one plugin in the registry, and one sidecar. The Java version there is whatever jdt.ls' preferred runtime is -- Java 11 in this case, as it's the minimum. Adding support for new Java versions then only becomes a question of adding a new devfile with samples and testing it (which is already required today, as you pointed out), and adding an initContainer and pointing jdt.ls to whatever the target JDK is.
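
As a rough illustration of that last step (the openjdk:14 image and the JDK path inside it are assumptions, not a tested configuration), the Java-specific part of such a Java 14 devfile could look like:

components:
  - id: redhat/java11/latest
    # the single maintained Java plugin; jdt.ls itself still runs on Java 11
    type: chePlugin
  - referenceContent: |
      apiVersion: v1
      kind: Pod
      metadata:
        name: ws
      spec:
        initContainers:
          - name: copy-jdk14
            image: openjdk:14
            # the JDK location inside the openjdk:14 image is an assumption
            command: ['sh', '-c', 'cp -R /usr/java/openjdk-14/ /tmp/jdk']
            volumeMounts:
              - name: jdk
                mountPath: /tmp/jdk
    type: kubernetes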

Also +1 to making the devfile syntax easier -- I think 2 lines is much more manageable than 14.

benoitf commented 4 years ago

Hi, IMHO the problem with saying "In the second scenario the only maintenance associated with the Java plugin is maintaining one plugin in the registry, and one sidecar",

is that you push maintenance of the JDK to users as well.

They no longer just select "plugin Java11, Java12, Java13, Java14"; they need to specify more stuff in each of their devfiles.

ericwill commented 4 years ago

Hi, IMHO the problem with saying "In the second scenario the only maintenance associated with the Java plugin is maintaining one plugin in the registry, and one sidecar",

is that you push maintenance of the JDK to users as well.

They no longer just select "plugin Java11, Java12, Java13, Java14"; they need to specify more stuff in each of their devfiles.

True, maybe we can improve it? Contribute a plugin that allows for easy switching of JDK target environments through the UI:

Once a selection is made we could generate the appropriate lines and add them to the devfile and restart the workspace. Or maybe that's not possible (it's just a thought)?

l0rd commented 4 years ago
  • Add a new plugin to the registry

That's not needed with the new vscode-extensions.json as described here. One extension, n sidecars:

{
   "repository": "https://github.com/redhat-developer/vscode-java",
   "checkout": "v0.57.0",
   "dockerfiles": [
       "./dockerfiles/redhat-developer.vscode-java/Dockerfile.java11",
       "./dockerfiles/redhat-developer.vscode-java/Dockerfile.java14"
    ]
}
  • Add a new sidecar to the che-sidecar-java repo

That's equivalent to "add a new initContainer" in the new approach

  • Maintain both for x number of sprints

You need that in the new approach as well: maintain initcontainer + plugin

  • Add a new Java 14 devfile with code samples, Java 14 maven image, and other tools

You need that in the new approach as well

ericwill commented 4 years ago
  • Maintain both for x number of sprints

You need that in the new approach as well: maintain initcontainer + plugin

AIUI the initContainer does not require active maintenance, is it not just an upstream JDK image? Like openjdk, for example.

tsmaeder commented 4 years ago

@l0rd could you describe where that vscode-extensions.json lives and how the "dockerfiles" mechanism would work? It's the first time I've heard of it.

l0rd commented 4 years ago

AIUI the initContainer does not require active maintenance, is it not just an upstream JDK image? Like openjdk, for example.

@ericwill In both cases, in an initContainer and in a Dockerfile, you are copying a JDK into an image. The only difference is when you do that (at runtime or at build time). Hence you are only moving the maintenance problem somewhere else. To illustrate my reasoning, here is a Dockerfile that does the same copy as the initContainer + volume mechanism above:

FROM adoptopenjdk/openjdk11-openj9 as jdk

FROM quay.io/eclipse/che-sidecar-java:11-f76ca45
COPY --from=jdk /opt/java/openjdk/ /remote-endpoint/openjdk

Let me be clear: pre-built sidecars are better for the 2-3 JDKs that we support, BUT the initContainer approach is much better for users that want to use their own JDK, because they only need to add a few lines to their devfile instead of 1) building a sidecar image, 2) pushing it to an OCI registry, 3) building a new plugin registry image, and 4) rolling out the new plugin registry. In that case the extra costs of the initContainer approach are justified.

@l0rd could you describe where that vscode-extensions.json lives and how the "dockerfiles" mechanism would work? It's the first time I've heard of it.

@tsmaeder this is described in #17029 that is part of this epic #15819.

ericwill commented 4 years ago
  • Add a new plugin to the registry

That's not needed with the new vscode-extensions.json as described here. One extension, n sidecars:


{
   "repository": "https://github.com/redhat-developer/vscode-java",
   "checkout": "v0.57.0",
   "dockerfiles": [
       "./dockerfiles/redhat-developer.vscode-java/Dockerfile.java11",
       "./dockerfiles/redhat-developer.vscode-java/Dockerfile.java14"
    ]
}

How does this work exactly? Maybe it's more for #17029 (or I missed something), but one plugin runs in one sidecar, correct? Wouldn't we still need multiple Java plugins (java8, java11, java14) for each sidecar defined there?

l0rd commented 4 years ago

How does this work exactly? Maybe it's more for #17029 (or I missed something), but one plugin runs in one sidecar, correct? Wouldn't we still need multiple Java plugins (java8, java11, java14) for each sidecar defined there?

Yes I have added a comment in #17029. One entry in the extension.json may result in one or more plugins.

nickboldt commented 4 years ago

Hey, there, long time listener, first time caller.

I have a couple questions:

a) is it an existing customer need to be able to easily toggle (i.e., via the UI) between installed/available JDKs? Since I almost never write Java anymore, and since old JDKs are just ancient history, I don't know if a Che or CRW customer would legit care about having more than 1 JDK available in the IDE, like one can with desktop Eclipse. Isn't it enough that I can have different devfiles with different JDKs in them? Or with multiple JDKs? I could still compile the same code two ways in the same workspace, or run builds in separate workspaces to compare results.

b) the ability to disconnect the JDK from the plugin sidecar would allow us to WAY MORE EASILY support new arches like s390x (Z for Linux) and ppc64le (IBM Power), as we would NOT have to have two essentially-identical images:

and

Instead we would be able to reuse existing Red Hat Container Catalogue OpenJDK & OpenJ9 images as-is, and would also therefore benefit from faster CVE fixes from that team, independent of the Che/CRW development/productization efforts.

(Turns out that second one wasn't a question, but rather an endorsement of the idea of JDK decoupling from the sidecar, given the need to support multiple arches with alternate JVMs like openJ9.)

nickboldt commented 3 years ago

I've heard this won't get into 7.21 / 7.20.2 so I'll set a 7.22 milestone for now just so it's got a target to work toward

ericwill commented 3 years ago

I've heard this won't get into 7.21 / 7.20.2 so I'll set a 7.22 milestone for now just so it's got a target to work toward

Thanks Nick. I've added the remaining sub-issues in the OP of this thread.

che-bot commented 3 years ago

Issues go stale after 180 days of inactivity. lifecycle/stale issues rot after an additional 7 days of inactivity and eventually close.

Mark the issue as fresh with /remove-lifecycle stale in a new comment.

If this issue is safe to close now please do so.

Moderators: Add lifecycle/frozen label to avoid stale mode.

nickboldt commented 3 years ago

Would using https://sdkman.io/ get us closer to having multiple-JDK support inside Che/CRW in the same container (including airgapped support)?

E.g., a container with only JDK 8 and 14 preloaded; if an airgapped customer wants something else like 11 or 15, they'd have to figure out how to break through the airgap to download the other SDK rpms into their environment.

tsmaeder commented 3 years ago

if an airgapped customer wants something else like 11 or 15, they'd have to figure out how to break thru the airgap to download the other SDK rpms into their environment.

That's exactly the behaviour that you want to prevent with an airgap. So no, I don't think that is going to fly. The solution presented above neatly takes care of the problems; we just need to go ahead and implement it.