MaastrichtU-IDS / dsri-documentation

📖 Documentation for the Data Science Research Infrastructure at Maastricht University
https://dsri.maastrichtuniversity.nl
MIT License

[Contribution] Deep Learning Workflows #11

Closed: surajpaib closed this issue 4 years ago

surajpaib commented 4 years ago

Hi Team,

I've been using the OC Pods for a variety of DL related tasks - preprocessing, training and deployment, and have documented some workflows for the rest of our team at Maastro.

One of the first ones is setting up Visual Studio Code server so that we can use a robust and powerful file editor to work with our projects.

I'm issuing a PR to the repo with my additions and documentation. If you think this might be useful to a larger audience beyond the Maastro group, I propose adding this to the existing documentation.

Thanks!

vemonet commented 4 years ago

Hi Suraj, thanks a lot for this contribution!

We accepted the pull request, and we will look into making this setup easier.

They have a Docker image that can be used: https://github.com/cdr/code-server/blob/v3.4.1/doc/install.md#docker

We can define a template to create VSCode pods from the web UI
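
In the meantime, the upstream image can be tried locally to check that it fits our needs; a minimal sketch based on the linked install docs (paths are just examples):

```shell
# Run the upstream code-server image locally and mount the current folder as the project
# (sketch based on the code-server v3.4.1 install docs; adapt the paths as needed)
docker run -it --rm \
  -p 127.0.0.1:8080:8080 \
  -v "$PWD:/home/coder/project" \
  codercom/code-server:latest
```

It listens on port 8080 by default, which is also what we would expose in the template.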

surajpaib commented 4 years ago

The Docker image sounds perfect. Let me know if I can help test the template out. That would be a much more elegant solution.

vemonet commented 4 years ago

[Screenshot from 2020-09-01 15-14-57]

You can do it now! It worked on my side: you deploy it on your persistent storage at the path you want. I am surprised at how fast it is; I was expecting it to be a bit slow, but it is actually faster than my personal VSCode.

Here is all you need to provide to create it:

[Screenshot from 2020-09-01 15-18-08]

You should be able to access the template in maastro-domain-adaptation
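
If you prefer the command line, something along these lines should be equivalent to the web UI form above (the template and parameter names here are only illustrative, check the actual template for the real ones):

```shell
# Hypothetical CLI equivalent of the web UI form in the screenshot;
# the template and parameter names are illustrative, not the real ones
oc get templates                          # list templates available in the project
oc new-app --template=vscode-server \
  -p APPLICATION_NAME=my-vscode \
  -p APPLICATION_PASSWORD=changeme
```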

vemonet commented 4 years ago

The next step would be to add GPU support to the Visual Studio Code image, I guess.

Here we have 2 options:

surajpaib commented 4 years ago

Hi Vincent! Great to see this added so quickly! I did follow the first option, as you pointed out.

Both options seem good to me. Would there be any significant differences between the two for the end-user?

vemonet commented 4 years ago

Going with the Nvidia container would be better; otherwise we would need to find and install all the dependencies for GPU support ourselves.
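
A rough sketch of what that Dockerfile could look like, using the official code-server install script on top of an Nvidia PyTorch base image (the base image tag and workspace path are just examples, not the actual DSRI image):

```dockerfile
# Sketch: add code-server on top of a GPU-enabled PyTorch base image
# (base image tag and workspace path are examples, not the actual DSRI image)
FROM nvcr.io/nvidia/pytorch:20.08-py3

# Install code-server with its official install script
RUN curl -fsSL https://code-server.dev/install.sh | sh

EXPOSE 8080
ENTRYPOINT [ "code-server", "/workspace" ]
```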

I started a template to deploy it from the Web UI; it works with port-forward.

But it does not work with Routes (to access it directly on the DSRI URLs).

Some notes about this issue with routing port 8080:

This could be due to the host definition in the code-server config.
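
For context, code-server reads a small YAML config file, and the bind-addr field is the likely culprit; the values below are my understanding of the defaults, not checked against this image:

```yaml
# ~/.config/code-server/config.yaml (default location)
bind-addr: 127.0.0.1:8080   # localhost-only binding would explain why the Route cannot reach it
auth: password
password: <generated on first start>
cert: false
```

As far as I understand, oc port-forward reaches the process on localhost inside the pod, while a Route goes through the Service to the pod IP, which would explain why port-forward works and the Route does not.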

surajpaib commented 4 years ago

Sorry for the delayed response @vemonet. I was quite busy with some other projects.

I tested the PyTorch VS Code images and they look pretty good! I generally use port forwarding, so it shouldn't be a problem.

I presume there isn't an easy fix to route to the DSRI URLs. Is there something I can perhaps play around with in the code-server config?

surajpaib commented 4 years ago

@vemonet I played around with a few things; using the --bind-addr flag (or the corresponding field in the config) to bind on 0.0.0.0 seems to make the routes work in the GPU image.

I added a custom port and created a route to test it out: http://gpu-pytorch-vscode-general-test-maastro-domain-adaptation.app.dsri.unimaas.nl/login?to=%2F
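
For anyone who wants to reproduce this, something along these lines should do it (the service name is a guess based on the route URL above, and the port has to match the one code-server binds to):

```shell
# Sketch: expose the code-server service through an OpenShift Route
# (service name is a guess; match --port to the port in bind-addr)
oc expose svc/gpu-pytorch-vscode-general-test --port=8080
oc get route gpu-pytorch-vscode-general-test   # prints the generated .app.dsri.unimaas.nl URL
```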

vemonet commented 4 years ago

Hi @surajpaib

I added the bind-addr directly in the Dockerfile to start it properly (even with plain Docker you need this to avoid passing --net=host):

```dockerfile
ENTRYPOINT [ "code-server" ]
CMD [ "--bind-addr", "0.0.0.0" ]
```

But it does not fix the issue, and I cannot access yours either: http://gpu-pytorch-vscode-general-test-maastro-domain-adaptation.app.dsri.unimaas.nl/login?to=%2F

I also tried specifying the port, but I get the same issue:

```dockerfile
ENTRYPOINT [ "code-server" ]
CMD [ "--bind-addr", "0.0.0.0:8080" ]
```

vemonet commented 4 years ago

Solved by starting code-server on port 8081 instead of 8080.
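
For the record, a minimal sketch of the combination that ended up working, as I understand it (bind on all interfaces, but on port 8081):

```dockerfile
# Bind code-server on all interfaces, on 8081 instead of 8080
ENTRYPOINT [ "code-server" ]
CMD [ "--bind-addr", "0.0.0.0:8081" ]
```

The Service and the Route target port then presumably need to point at 8081 as well.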