cogent3 / Cogent3Workshop

Materials for the Phylomania workshop
BSD 3-Clause "New" or "Revised" License

Test docker documentation and container #30

Closed: khiron closed this issue 9 months ago

khiron commented 10 months ago

Using the documentation in the installing docker wiki, give it a try.

khiron commented 10 months ago

Give it a go without my assistance, and take notes about anything (no matter how trivial) that caused any roadblocks. I'll schedule time with y'all next week to zoom/screenshare with you as needed.

fredjaya commented 10 months ago

OS: Pop!_OS 22.04 LTS x86_64, Kernel: 6.4.6-76060406-generic, Shell: bash 5.1.16


Worked well!! I found the video was clear, well-paced, and easy to follow.

The main issue on my linux machine was that all docker commands required root privileges. Specifics below with timestamps.

Also not too sure if this is obvious, but should (example) data be moved into the container at all?

3:00

My VS Code was slightly different when selecting the git widget:
[screenshot: VS Code source control widget]

So I changed branches using the footer menu instead (clicking on "main"). [screenshot]

4:45

docker build --tag cogent3workshop -f ./docker gave:

ERROR: permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Get "http://%2Fvar%2Frun%2Fdocker.sock/_ping": dial unix /var/run/docker.sock: connect: permission denied

Fixed with sudo docker build --tag cogent3workshop -f ./docker

5:35

Same as above, ran sudo docker run -it --rm -p 8888:8888 -v ${PWD}:/workspace cogent3workshop

8:00

Open remote -> Attach to running container -> "Current user does not have permission to run 'docker'". There are fixes in the Linux post-install docs, but I didn't attempt them.

KatherineCaley commented 10 months ago

Chip: Apple M2 Max, OS: macOS Ventura

  1. I clicked on "Docker Desktop for Mac with Apple silicon", not "Get Docker". Possibly you could note that there is a difference between Apple silicon and Intel Macs, and users need to choose the appropriate one.

  2. When opening Docker, I needed to accept the terms and conditions first, pick which settings I wanted to use (normal or advanced settings), then either sign in or continue without signing in, and then let Docker know what I plan to use it for. (All pretty trivial, but might be worth noting that you don't need to sign in.)

  3. There was no Docker icon on the top right as described. If you click on "Docker Desktop" on the top left, then "About", it will give you information on the version. Also, if you click on the settings icon on the top right and then "Software Updates", it will let you know if you require any updates.
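
A terminal alternative for the version check, assuming Docker Desktop has put the docker CLI on your PATH (it does by default):

docker --version
docker info    # also reports whether the daemon is reachable and how many containers/images exist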

Watching the screencast:

It's not clear if you mean your computer's terminal or the terminal in Docker. Also, when I ran docker info I had no containers running; it wasn't clear if I should have had three, since you had three in the screencast.

I don't have the setup where typing "code" in the terminal opens VS Code, and you don't have that as a requirement for the VS Code setup. Maybe let people know they can just open the folder in VS Code if they don't have that set up.

Selecting the branch is at the bottom left of the source control widget on Mac.

Also, it's not obvious how to run the command below with my current admin rights on my Mac. I might have run out of su privileges.

❯ docker build --tag cogent3workshop -f ./docker/DockerFile

ERROR: "docker buildx build" requires exactly 1 argument.
See 'docker buildx build --help'.

Usage:  docker buildx build [OPTIONS] PATH | URL | -

Start a build

su didn't work for me; this is where I gave up...

su docker build --tag cogent3workshop -f .\docker
Password:
su: Sorry
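
Side note for anyone hitting the same error: "requires exactly 1 argument" means the build context path is missing, rather than a permissions problem; docker build expects a path (usually .) as its final argument. Assuming the Dockerfile really is at ./docker/DockerFile, something like this should get past it:

docker build --tag cogent3workshop -f ./docker/DockerFile .
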
rmcar17 commented 10 months ago

OS: Windows 10 Home, Version 10.0.19045, Build 19045


Installing Docker

The installation instructions refer to setting up Docker with the Hyper-V backend. Hyper-V is a Windows 10 Pro feature, so I was unable to proceed with that step.

I noted there was a WSL 2 backend (which was alluded to in step 4), so I continued with the remainder of the instructions. Seemed to work despite my computer using WSL 1.

Docker did not automatically start after installation.

Building the Docker Image

It took just under 3 minutes to build the image on my system rather than the suggested 90 seconds.

Running the docker container

To start a Docker container using the image you just built, run the following command in a linux terminal (eg: from the terminal in VS Code)

The installation instructions didn't set up VS Code to use WSL, so of course the command doesn't work. The second command works instead, with the appropriate path.

Also, though case-insensitive, the clone path is capitalised "Cogent3Workshop" rather than all lowercase.

As an aside, switch to using jupyter lab rather than jupyter notebook?
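
If the image did switch to JupyterLab, the container's launch command would presumably become something roughly like the following (the flags shown are the usual ones for running inside a container, not necessarily what the Dockerfile uses):

jupyter lab --ip=0.0.0.0 --port=8888 --no-browser --allow-root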

Docker in VS Code

There appears to be no workspace directory. VS Code extensions need to be installed again. Getting an "Error: EROFS: read-only file system, open '/root/.vscode-server/extensionsCache/930828eb-0a56-44d8-8379-28e4d237b3b3'".

Tried to restart the docker container, and now it can't start with error "docker: Error response from daemon: mkdir /var/lib/docker/overlay2/c8f375542f2d26bee361485eee2a1a43a9e974a06d5a9ff1eb2047623720aa48-init: read-only file system.".

At about the same time, my C drive ran out of space so that might be something to do with it (on my laptop I have a small SSD which mainly contains the OS which docker default installed to rather than my bigger drive). Going to see if I can reinstall it all later onto the other drive and see how I go.
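
For anyone else whose drive fills up, two stock Docker commands are handy for checking and reclaiming space (prune deletes stopped containers, unused networks and dangling images, so read its prompt before confirming):

docker system df
docker system prune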

khiron commented 10 months ago

Also not too sure if this is obvious, but should (example) data be moved into the container at all?

The current directory in the host OS is mounted to the /workspace directory in the container when the container is run. So any data in your current directory on the host will be available from /workspace inside the container, and any changes to those files in the container will change the files on the host OS.
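
A minimal illustration, using a throwaway file name:

touch /workspace/example.txt    # run from a terminal inside the container
ls example.txt                  # the same file is now visible in the current directory on the host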

khiron commented 10 months ago

Open remote -> Attach to running container -> "Current user does not have permission to run 'docker'". There are fixes in the Linux post-install docs, but I didn't attempt them.

These issues of requiring elevated privileges to build images and run containers should be addressed by adding the user to the docker group:

sudo usermod -aG docker <username>

I am going to rework the Linux instructions as Docker Desktop has recently been released for Linux. It will still need sudo to install, but it will create the docker group and add the current user to it.
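
One gotcha: the group change only applies to new login sessions, so either log out and back in, or pick it up immediately with newgrp:

newgrp docker
docker run hello-world    # should now work without sudo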

khiron commented 10 months ago

I noted there was a WSL 2 backend (which was alluded to in step 4), so I continued with the remainder of the instructions. Seemed to work despite my computer using WSL 1.

I upgraded my system from WSL1 to WSL2 in 2021 when they added GPU support. It was a pretty simple upgrade, and it resulted in some performance increase. My old VMs stayed WSL1 until I upgraded them to WSL2.

From memory this is roughly the process I used

https://dev.to/adityakanekar/upgrading-from-wsl1-to-wsl2-1fl9
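
From memory the core commands were along these lines (the distro name is just an example; check yours with wsl --list):

wsl --set-default-version 2
wsl --set-version ubuntu-20.04 2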

GavinHuttley commented 10 months ago

I suggest adding a "We assume WSL 2" note; it's possible someone will turn up with an older WSL, in which case Robert can help them out!
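
A quick check attendees can run in PowerShell to confirm which WSL version each distro is on:

wsl --list --verbose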

khiron commented 10 months ago

At about the same time, my C drive ran out of space so that might be something to do with it (on my laptop I have a small SSD which mainly contains the OS which docker default installed to rather than my bigger drive). Going to see if I can reinstall it all later onto the other drive and see how I go.

@rmcar17
It's a little involved on Windows to move the location where Docker Desktop stores images. On other platforms you should just be able to go to Settings / Resources / Disk image location in Docker Desktop.

With Windows it's more complicated, as the virtualization is being managed by the Windows Subsystem for Linux.

You should be able to ask WSL what images (called distributions in Windows) are loaded by running this in PowerShell:

wsl --list --running

It will give you something like this

PS C:\Users\Richard> wsl --list --running
Windows Subsystem for Linux Distributions:
ubuntu-20.04 (Default)
docker-desktop
docker-desktop-data

Shut down Docker Desktop and confirm its distros are not running in WSL with the above.
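If anything still shows as running, wsl --shutdown will force-stop all WSL distros (note it also stops any other WSL sessions you have open):

wsl --shutdown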

Then you need to export the distro to a tarball on your bigger drive (assuming D: here)

wsl --export docker-desktop-data D:\docker-desktop-data.tar

Then you need to unregister the distro from wsl

wsl --unregister docker-desktop-data

Then you reimport it

wsl --import docker-desktop-data D:\my_new_location D:\docker-desktop-data.tar --version 2

Then restart Docker Desktop and you should be good to go. I did this back when I switched to WSL2, because I also had a primary drive that was too small and too many large WSL images.

rmcar17 commented 10 months ago

I upgraded my system from WSL1 to WSL2 in 2021 when they added GPU support. It was a pretty simple upgrade, and it resulted in some performance increase. My old VMs stayed WSL1 until I upgraded them to WSL2.

I've wanted to upgrade to WSL2 for quite a long time. For whatever reason, though, the internet doesn't work on WSL2 on my machine, so after much troubleshooting, whenever I've tried I've been forced to downgrade back to WSL1.

Interestingly however, despite having WSL2 not WSL1 and not having Hyper-V enabled, I was still able to set up docker and create the environment (I suspect other issues were to do with my C drive running out of space). I'll try again this afternoon.

rmcar17 commented 10 months ago

@rmcar17 It's a little involved on Windows to move the location where Docker Desktop stores images. On other platforms you should just be able to go to Settings / Resources / Disk image location in Docker Desktop.

With Windows it's more complicated, as the virtualization is being managed by the Windows Subsystem for Linux.

I saw that option as well before I uninstalled docker, and it seemed to attempt to move things across (before I accidentally killed and corrupted it).

I followed a similar process to get my Ubuntu distro across to my other drive. In any case I'll give things a try and let you know the results after lunch.

khiron commented 10 months ago

Interestingly however, despite having WSL2 not WSL1 and not having Hyper-V enabled, I was still able to set up docker and create the environment (I suspect other issues were to do with my C drive running out of space). I'll try again this afternoon.

Before you blow away Docker Desktop, and while you are still on WSL1, can you get a screenshot for me of Settings / Resources / Disk image?

This is mine. I have WSL2 and it requires me to use the WSL2 configuration, which is a pain.

[screenshot: Docker Desktop Resources settings]

But with WSL1 they may still allow you to move the image location.

rmcar17 commented 10 months ago

@khiron Okay, looked into it again. So Docker is running on WSL 2 on my machine, whereas my Ubuntu is on WSL 1, which makes quite a bit more sense as to why things work. [screenshot]

Are you sure you are unable to move the Docker disk image location? I was able to, just by clicking on Browse and making a new folder somewhere, and it moved it over. [screenshot]

rmcar17 commented 10 months ago

Went through the remainder of the installation instructions. Took 600.1 seconds on my laptop to build the docker image.

When I started VS Code and attached to the running Docker container, VS Code automatically started in the /root directory instead of the /workspace directory (mustn't have found it yesterday), so I had to navigate to get where I needed.

Once there (other than having to install the vscode extensions) everything worked.

I do think it would be useful to have alternative installation instructions that don't require Docker (though having a consistent environment is definitely a strong benefit, the setup required for the average user is quite complex). I'm not sure if more is going to go into the Dockerfile, but is the main purpose just to set up the Python environment? If so, it may be useful to provide a conda environment.yml file for an alternative installation. From there, assuming they have conda installed, setup should just be one command. I'm not sure if VS Code setup is necessary either; most things could just be done from jupyter lab for the workshop (I imagine).
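
To sketch what that could look like (the environment.yml file and environment name here are hypothetical, nothing in the repo yet), the whole non-docker setup might reduce to:

conda env create -f environment.yml
conda activate cogent3workshop
jupyter lab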

khiron commented 10 months ago

I have a build pending a pull request that simplifies it right down to

I'm working on updated docs and a video over the weekend. If Docker Desktop (or the docker daemon) is already installed, it will take about 10 minutes to get working.

I do think it would be useful to have alternative installation instructions that don't require Docker

For developers I agree. But Docker isn't really a core dev thing, even though I personally do use containers all the time.

But for a workshop it means that if they have Docker Desktop installed, we don't have to worry about issues showing up due to heterogeneous hardware.