microsoft / vscode-remote-release

Visual Studio Code Remote Development: Open any folder in WSL, in a Docker container, or on a remote machine using SSH and take advantage of VS Code's full feature set.
https://aka.ms/vscode-remote

Support SELinux enabled systems #1333

Open langdon opened 4 years ago

langdon commented 4 years ago

Environment

Steps to Reproduce:

  1. use default open-folder in container
  2. choose python 3 container
  3. docker exec in to the generated container
  4. ls -l /workspaces/name-of-your-project
  5. permission denied

You can see the same problem in the normal GUI interface but it is less obvious what is going on. You also have the same issue if you use a custom container and (probably) any other container.

Basically, as far as I can tell, the bind mount of the user's devel dir into /workspaces is not using the z or Z flag that lets it work well with SELinux. I think this will be a particular problem because you can't set that flag at all with the new --mount option (see "Differences between --mount and --volume" at https://docs.docker.com/engine/reference/commandline/service_create/#add-bind-mounts-or-volumes).
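For illustration, the relabel suffix only exists on the --volume/-v syntax; a minimal sketch (the image name and paths are made up):

# -v/--volume accepts the SELinux relabel suffixes z and Z:
docker run --rm -v /home/me/project:/workspaces/project:Z some-image ls -l /workspaces/project
# --mount offers no equivalent relabel option, per the Docker docs linked above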

There is a workaround that I note here for anyone running into this issue, but it is probably not something the tool should do automatically: you can chcon your devel directory to make it modifiable by Docker, for example chcon -Rt svirt_sandbox_file_t /full/path/to/your/code, then reattach your devel dir to the container (probably using rebuild). See the sketch below.
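A quick way to check that the relabel took effect (a sketch; the path is illustrative, and on current Fedora svirt_sandbox_file_t is an alias of container_file_t, so ls may print either name):

chcon -Rt svirt_sandbox_file_t /full/path/to/your/code
ls -Zd /full/path/to/your/code   # the type field should now show the container-accessible file type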

chrmarti commented 4 years ago

You could set "workspaceMount" to null (or the empty string) and use "runArgs" to do the mount using --volume in the devcontainer.json.
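Assembled into devcontainer.json, that would look something like this (a sketch; the host path is illustrative, and a literal path is used because variable substitution in "runArgs" was not supported at the time, as discussed below):

{
    "workspaceMount": "",
    "runArgs": ["--volume=/home/me/project:/workspaces/project:Z"]
}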

chrmarti commented 3 years ago

There are now also built-in ways of connecting to a Docker volume:
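(The examples that followed are not preserved here. One such route, sketched with an illustrative volume name, is pointing "workspaceMount" at a named volume instead of a bind mount, which sidesteps SELinux relabeling of host files entirely:

"workspaceMount": "source=my-project-volume,target=/workspaces/project,type=volume"

Cloning a repository into a container volume, alluded to in the next comment, is the other built-in route.)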

sclel016 commented 3 years ago

What is the current recommended workaround for SELinux? I'm trying to open a host workspace in a container using the dev container infrastructure built into vscode-remote. It seems that on systems with SELinux, this can only be accomplished with a bind mount and the z or Z flags.

Short of cloning a repository to a volume, is there a better workflow that still involves vscode-remote?

PavelSosin-320 commented 3 years ago

@sclel016 The people who invented SELinux provided a tool that gives rootless users the same power the root user has without compromising security: FUSE and FUSE mounts (look at the contributor lists of both projects). It is an important part of rootless Docker and Podman and comes to Linux as a dependency. Using the FUSE mount implementation instead of the Linux mount solves the problem. The configuration that works perfectly for me in Podman:

graphDriverName: overlay
graphOptions:
  overlay.mount_program:
    Executable: /usr/bin/fuse-overlayfs
    Package: fuse-overlayfs-1.5.0-1.fc33.x86_64
    Version: |-
      fusermount3 version: 3.9.3
      fuse-overlayfs: version 1.5
      FUSE library version 3.9.3
      using FUSE kernel interface version 7.31

PavelSosin-320 commented 3 years ago

@sclel016 The Docker overlay graph driver backed by fuse-overlayfs, i.e. the fuse-overlayfs storage driver, should work for you on any Linux.
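For rootless Podman that is typically configured in ~/.config/containers/storage.conf; a minimal sketch, assuming fuse-overlayfs is installed at its usual path:

[storage]
driver = "overlay"

[storage.options.overlay]
mount_program = "/usr/bin/fuse-overlayfs"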

aallrd commented 3 years ago

Hello,

I am using VSCode 1.57.1 and Podman 3.1.2.

I managed to mount my SELinux protected directory using this runArgs configuration:

// Required for an empty mount arg, since we manually add it in the runArgs
"workspaceMount": "",
"runArgs": [
  "--volume=/home/aallrd/work/project:/workspaces/project:Z"
]

However, I am not able to use the ${workspaceFolder} and ${workspaceFolderBasename} variables in the runArgs values for the volume command.

I am not sure if it used to work, but I remember doing something like this previously (where ${workspaceFolder} would be the folder opened with VS Code, containing the .devcontainer/devcontainer.json file):

"runArgs": [
  "--volume=${workspaceFolder}:/workspaces/${workspaceFolderBasename}:Z",
]

It fails with this error:

[2021-07-06T16:59:09.847Z] Error: error creating named volume "${workspaceFolder}": error running volume create option: names must match [a-zA-Z0-9][a-zA-Z0-9_.-]*: invalid argument

Is it expected?

aallrd commented 3 years ago

Could it be linked to https://github.com/microsoft/vscode-remote-release/issues/5007 ?

chrmarti commented 3 years ago

@aallrd Thanks for filing https://github.com/microsoft/vscode-remote-release/issues/5301. Tracking the missing variables support with "runArgs" there.

lovasoa commented 3 years ago

Hello! Has there been any news on this front? Dev containers are currently broken on Fedora.

Is there a problem with --volume=${localWorkspaceFolder}:/workspaces/${localWorkspaceFolderBasename}:Z which prevents it from being enabled by default ?

chrmarti commented 3 years ago

@lovasoa Make sure to clear the "workspaceMount":

    "workspaceMount": "",
    "runArgs": ["--volume=${localWorkspaceFolder}:/workspaces/${localWorkspaceFolderBasename}:Z"],

PavelSosin-320 commented 3 years ago

@langdon You can try it:

jibsaramnim commented 2 years ago

I ran into this for the first time just now as I'm setting up a new environment using Fedora 35. If you're using one of the existing devcontainer configurations, or wrote your own that uses a docker-compose.yml file, you can achieve the same as what's mentioned above by setting the :Z flag (an SELinux-specific label, apparently) on a volume defined there, like so:

services:
  app:
    # ...etc
    volumes:
      - ..:/workspace:Z

Hopefully a more official solution can be made available at some point. I'm not sure an environment-specific fix should live in a repository's configuration file, as colleagues/collaborators might use very different environments, but at least for now this can hopefully help you get back to working on your project :).

Aricg commented 2 years ago

ha ha of course it was selinux

bradydean commented 1 year ago

I'm working on Fedora 37 and getting this. The manual bind mount doesn't work; the files inside the container are owned by root.

jibsaramnim commented 1 year ago

I'm working on Fedora 37 and getting this. The manual bind mount doesn't work; the files inside the container are owned by root.

Could you share (a snippet of) your docker-compose.yml or devcontainer.json file? I'm running Fedora 37 as well and have been able to continue using it as before; maybe we can spot what's off in your config.

bradydean commented 1 year ago

Hey @jibsaramnim, this is my devcontainer.json

{
    "name": "Existing Dockerfile",
    "build": {
        "context": "..",
        "dockerfile": "../Dockerfile.dev"
    }
}

Dockerfile.dev

FROM node:18.12.1

RUN corepack enable && corepack prepare yarn@stable --activate

USER node

I also tried

{
    "name": "Existing Dockerfile",
    "build": {
        "context": "..",
        "dockerfile": "../Dockerfile.dev"
    },
    "workspaceMount": "",
    "runArgs": ["--volume=${localWorkspaceFolder}:/workspaces/${localWorkspaceFolderBasename}:Z"]
}

but both ways have the same problem: the workspace files are owned by root

node@b8c568a240d2:/workspaces/app$ ls -l
total 100
-rw-r--r-- 1 root root    93 Dec 11 20:42 Dockerfile.dev
-rw-r--r-- 1 root root  1582 Jun 22  1984 README.md
-rw-r--r-- 1 root root   201 Jun 22  1984 next-env.d.ts
-rw-r--r-- 1 root root   137 Jun 22  1984 next.config.js
-rw-r--r-- 1 root root   465 Dec 10 02:36 package.json
drwxr-xr-x 1 root root    40 Dec 10 02:35 pages
drwxr-xr-x 1 root root    42 Dec 10 02:35 public
drwxr-xr-x 1 root root    52 Dec 10 02:35 styles
-rw-r--r-- 1 root root   509 Jun 22  1984 tsconfig.json
-rw-r--r-- 1 root root 22258 Dec 10 02:38 tsconfig.tsbuildinfo
-rw-r--r-- 1 root root 50789 Dec 10 02:36 yarn.lock

EDIT: Using Docker 20.10.21 via Docker Desktop.

jibsaramnim commented 1 year ago

Dockerfile.dev

FROM node:18.12.1

RUN corepack enable && corepack prepare yarn@stable --activate

USER node

Correct me if I'm wrong, but are you using a non-VS Code container image? There might be a difference in user IDs that causes the issue for you. Alternatively, you could try setting "remoteUser": "node" in your devcontainer.json to see if that resolves it with the container image you're using here.
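That would look something like this (a sketch based on the devcontainer.json you posted above, with only the one line added):

{
    "name": "Existing Dockerfile",
    "build": {
        "context": "..",
        "dockerfile": "../Dockerfile.dev"
    },
    "remoteUser": "node"
}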

Could you perhaps try starting with one of VS Code's container presets? In my case, with the exact same runArgs values, I have it working just fine. Same for projects where I have a docker-compose.yml file; setting the right flag there makes it work perfectly under Fedora 37.

bradydean commented 1 year ago

FWIW adding "remoteUser": "node" w/ node:18.12.1 did not work.

Using the node+typescript preset + runArgs does not work either, files are still owned by root.

{
    "name": "Node.js & TypeScript",
    "image": "mcr.microsoft.com/devcontainers/typescript-node:0-18",
    "workspaceMount": "",
    "runArgs": ["--volume=${localWorkspaceFolder}:/workspaces/${localWorkspaceFolderBasename}:Z"]
}

bradydean commented 1 year ago

I played around with :Z volumes on a dummy container, and it doesn't appear Docker is changing the SELinux labels at all. Should I expect a difference in ls -Z on a file before/after it has been mounted with :Z?
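For anyone reproducing that check, a minimal sketch (the file name is illustrative):

touch testfile
ls -Z testfile                                       # note the type field, e.g. user_home_t
docker run --rm -v "$PWD/testfile:/testfile:Z" alpine true
ls -Z testfile                                       # with working SELinux support the type should now be container_file_t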

jibsaramnim commented 1 year ago

Using the node+typescript preset + runArgs does not work either, files are still owned by root.

There might be something (permission related, perhaps?) going on on your particular system -- who owns the files you are trying to edit?

I just tried it with the same node+typescript preset you mentioned in a test directory, just modifying devcontainer.json to add the workspaceMount and runArgs lines exactly as you wrote them out, and it's looking fine on my end:

{
  "name": "Node.js & TypeScript",
  "image": "mcr.microsoft.com/devcontainers/typescript-node:0-18",

  "workspaceMount": "",
  "runArgs": [
    "--volume=${localWorkspaceFolder}:/workspaces/${localWorkspaceFolderBasename}:Z"
  ]
}
node ➜ /workspaces/temp $ ls -Z
system_u:object_r:container_file_t:s0:c390,c979 readme.md  system_u:object_r:container_file_t:s0:c390,c979 test.js
node ➜ /workspaces/temp $ ls -la
total 0
drwxr-xr-x. 1 node node 58 Dec 12 08:30 .
drwxr-xr-x. 1 root root  8 Dec 12 08:32 ..
drwxr-xr-x. 1 node node 34 Dec 12 08:30 .devcontainer
-rw-r--r--. 1 node node  0 Dec 12 08:30 readme.md
-rw-r--r--. 1 node node  0 Dec 12 08:30 test.js

Are you running podman, moby-engine or docker's own set of packages?

bradydean commented 1 year ago

Files are owned by my user account. I'm using Docker Desktop via the RPM package.

node ➜ /workspaces/temp $ ls -Z
system_u:object_r:container_file_t:s0:c390,c979 readme.md  system_u:object_r:container_file_t:s0:c390,c979 test.js

This is what I get inside the container

node ➜ /workspaces/next-app $ ls -lZ
total 144
drwxr-xr-x 2 root root ?   4096 Dec  7 14:28 app
-rw-r--r-- 1 root root ?    177 Dec  7 14:08 next.config.js
-rw-r--r-- 1 root root ?    201 Jun 22  1984 next-env.d.ts
-rw-r--r-- 1 root root ?    530 Dec  7 21:59 package.json
drwxr-xr-x 3 root root ?   4096 Dec  7 14:18 pages
drwxr-xr-x 2 root root ?   4096 Dec  7 13:47 public
-rw-r--r-- 1 root root ?   1582 Jun 22  1984 README.md
drwxr-xr-x 2 root root ?   4096 Dec  7 14:22 styles
-rw-r--r-- 1 root root ?    647 Dec  7 14:20 tsconfig.json
-rw-r--r-- 1 root root ? 107296 Dec  7 14:37 yarn.lock

This is outside the container

[brady@fedora next-app]$ ls -lZ
total 144
drwxr-xr-x. 2 brady brady unconfined_u:object_r:user_home_t:s0   4096 Dec  7 09:28 app
-rw-r--r--. 1 brady brady unconfined_u:object_r:user_home_t:s0    177 Dec  7 09:08 next.config.js
-rw-r--r--. 1 brady brady unconfined_u:object_r:user_home_t:s0    201 Jun 22  1984 next-env.d.ts
-rw-r--r--. 1 brady brady unconfined_u:object_r:user_home_t:s0    530 Dec  7 16:59 package.json
drwxr-xr-x. 3 brady brady unconfined_u:object_r:user_home_t:s0   4096 Dec  7 09:18 pages
drwxr-xr-x. 2 brady brady unconfined_u:object_r:user_home_t:s0   4096 Dec  7 08:47 public
-rw-r--r--. 1 brady brady unconfined_u:object_r:user_home_t:s0   1582 Jun 22  1984 README.md
drwxr-xr-x. 2 brady brady unconfined_u:object_r:user_home_t:s0   4096 Dec  7 09:22 styles
-rw-r--r--. 1 brady brady unconfined_u:object_r:user_home_t:s0    647 Dec  7 09:20 tsconfig.json
-rw-r--r--. 1 brady brady unconfined_u:object_r:user_home_t:s0 107296 Dec  7 09:37 yarn.lock

@jibsaramnim Are your SELinux labels the same inside and outside the container? This is what I meant when I said I don't think Docker is changing the labels correctly.

bradydean commented 1 year ago

@jibsaramnim What is the output of docker info | grep Security -A3 for you?

jibsaramnim commented 1 year ago

Are your SELinux labels the same inside and outside the container? This is what I meant when I said I don't think Docker is changing the labels correctly.

They are, yes:

node ➜ /workspaces/temp $ ls -lZ
total 0
-rw-r--r--. 1 node node system_u:object_r:container_file_t:s0:c390,c979 0 Dec 12 08:30 readme.md
-rw-r--r--. 1 node node system_u:object_r:container_file_t:s0:c390,c979 0 Dec 12 08:30 test.js

Outside the container:

~/P/temp ❯❯❯ ls -lZ
total 0
-rw-r--r--. 1 davejansen davejansen system_u:object_r:container_file_t:s0:c390,c979 0 12월 12일  17:30 readme.md
-rw-r--r--. 1 davejansen davejansen system_u:object_r:container_file_t:s0:c390,c979 0 12월 12일  17:30 test.js

What is the output of docker info | grep Security -A3 for you?

docker info | grep Security -A3
 Security Options:
  seccomp
   Profile: default
  selinux

In case it helps: I am running Fedora Silverblue 37 with moby-engine and docker-compose layered. My Docker setup is as stock as can be, other than my own user having been added to the docker group.

bradydean commented 1 year ago

Cool, that's what I expected. I don't have selinux in my docker info output. It seems to be an issue with Docker Desktop, even when I add the config option to enable SELinux support. I made an issue for it here: https://github.com/docker/desktop-linux/issues/104

I temporarily switched to Podman, and its SELinux support works.

langdon commented 1 year ago

@bradydean have you considered Podman Desktop? (shameless plug)

bradydean commented 1 year ago

@langdon oh nice, I didn't even know that existed. I'll play around with it.

bradydean commented 1 year ago

Well, I'm not really sure what happened, but my files inside the container are owned by root again, even using podman...

[brady@fedora foo]$ podman run --rm --user node -v $PWD/file:/file:Z mcr.microsoft.com/devcontainers/typescript-node:0-18 ls -l /
total 76
drwxr-xr-x.   1 root   root    4096 Dec 19 14:07 bin
drwxr-xr-x.   2 root   root    4096 Sep  3 12:10 boot
drwxr-xr-x.   5 root   root     340 Dec 20 23:56 dev
drwxr-xr-x.   1 root   root    4096 Dec 20 23:56 etc
-rw-r--r--.   1 root   root       6 Dec 20 23:47 file
drwxr-xr-x.   1 root   root    4096 Dec  6 09:02 home
drwxr-xr-x.   1 root   root    4096 Dec  6 02:14 lib
drwxr-xr-x.   2 root   root    4096 Dec  5 00:00 lib64
drwxr-xr-x.   2 root   root    4096 Dec  5 00:00 media
drwxr-xr-x.   2 root   root    4096 Dec  5 00:00 mnt
drwxr-xr-x.   1 root   root    4096 Dec  6 09:05 opt
dr-xr-xr-x. 472 nobody nogroup    0 Dec 20 23:56 proc
drwx------.   1 root   root    4096 Dec 19 14:07 root
drwxr-xr-x.   1 root   root    4096 Dec 20 23:56 run
drwxr-xr-x.   1 root   root    4096 Dec 19 14:07 sbin
drwxr-xr-x.   2 root   root    4096 Dec  5 00:00 srv
dr-xr-xr-x.  13 nobody nogroup    0 Dec 20 13:36 sys
drwxrwxrwt.   1 root   root    4096 Dec 19 21:05 tmp
drwxr-xr-x.   1 root   root    4096 Dec  5 00:00 usr
drwxr-xr-x.   1 root   root    4096 Dec  5 00:00 var

bradydean commented 1 year ago

Anyways, podman unshare chown 1000:1000 file fixed that, and it reminded me of Docker Desktop's file-sharing options. I already had /home in there, but I removed it, then added /home/brady, and Docker Desktop is working now.

EDIT: It worked once and only once. EDIT 2: Did some more playing around, and apparently podman unshare will correct the perms for docker-desktop.
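For reference, a sketch of that podman unshare fix (the file name is illustrative; 1000:1000 is the UID:GID the container user maps to):

podman unshare ls -ln file             # inspect ownership from inside the rootless user namespace
podman unshare chown 1000:1000 file    # chown as seen by the container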

ctron commented 1 year ago

Is there a real solution to this now? The workarounds I saw all seem to require patching the devcontainer configuration, which may work for one setup but not for another. So, as the original reporter mentioned, I would expect some out-of-the-box support for this.

theonlyfoxy commented 1 year ago

As a workaround, you could set remoteUser to root.

example devcontainer.json:

{
    "remoteUser": "root",
    "containerUser": "vscode",
    "workspaceMount": "",
    "runArgs": ["--volume=${localWorkspaceFolder}:/workspaces/${localWorkspaceFolderBasename}:Z"]
}

also see.

TommyTran732 commented 1 year ago

I ran into this issue with the Docker package from the official Fedora repository. However, when I switched to the package from the upstream Docker repo, the problem went away; there's no need to manually set the :z or :Z flag. I am not sure what has changed, though.

ctron commented 1 year ago

To my understanding, that simply drops the SELinux support and runs everything as root, which might not be everyone's cup of tea.

sanmai-NL commented 7 months ago

As a workaround, you could set remoteUser to root.

example devcontainer.json:

{
    "remoteUser": "root",
    "containerUser": "vscode",
    "workspaceMount": "",
    "runArgs": ["--volume=${localWorkspaceFolder}:/workspaces/${localWorkspaceFolderBasename}:Z"]
}

also see.

This does not work when your image has tooling installed and configured specifically for the unprivileged user (PATH, standard directories, etc.).

geoffreysmith commented 2 months ago

No, I have a feeling this is why only containerd + Docker are used in the k8s lightweight-VM + containerd setup. There are a few other container runtimes that are allowed. Basically, from what I gather, the eventual workaround is to assume containers run under a hypervisor (gVisor now), ignored by the daemon, and to have SELinux ignore all non-objects and containers.

I believe there's a system call made in Docker that ignores binds/volumes, and as the hypervisor intercepts all Linux calls, it makes more sense to patch it there than to break all of Docker.

Can someone direct me to a GitHub repo they tested this on? I can trace the syscall and see if disabling AppArmor/seccomp in the containers fixes this.

This feels like a historical Docker issue where it is easier to rewrite a hypervisor than to change Docker.

Malix-off commented 2 months ago

So what should be the default minimal addition to the .devcontainer/devcontainer.json file to make Podman run on SELinux?

So far I've seen 3 versions (excluding the :Z variant, which is basically cheating), but I don't know which is best, what some of the options really do, or why they work:

  1. vscode docs

    "runArgs": [
        "--userns=keep-id"
    ],
    "containerEnv": {
        "HOME": "/home/node"
    }
  2. universal blue - devcontainer setup

    "runArgs": [
        "--userns=keep-id:uid=1000,gid=1000"
    ],
    "containerUser": "vscode",
    "updateRemoteUserUID": true,
    "containerEnv": {
        "HOME": "/home/vscode"
    },
  3. universal blue - podman support

    "runArgs": [
        "--userns=keep-id",
        "--security-opt=label=disable"
    ],
    "containerEnv": {
        "HOME": "/home/vscode"
    },

geoffreysmith commented 2 months ago

Install gVisor, run containers in a lightweight hypervisor, set SELinux to ignore objects and just binaries, and call it a day. If the containers need to talk to each other there's tompr or something. Dev containers are made for old-school Docker running as root, not anything OCI-compliant.

Malix-off commented 2 months ago