remicres / otbtf

Deep learning with otb (mirror of https://forgemia.inra.fr/orfeo-toolbox/otbtf)
Apache License 2.0
161 stars 39 forks

Documentation to use docker #35

Open · remicres opened this issue 3 years ago

remicres commented 3 years ago

More documentation related to the use of Docker would greatly ease the user experience! Here is a list of things that would be very useful to put in the documentation:

There are some resources online (like here), but it would be nice to have it all in the repository!

viniciuspg commented 3 years ago

Hi Remi

I wrote you a message on ResearchGate because I couldn't find a way to contact you here. I would like to know whether there is already a way to use OTBTF on Windows 10, and whether there is a tutorial I can follow, because unfortunately I can't set up a dual boot on my notebook (an Acer Predator) without risking losing Windows. As I use the computer for regular work, I can't take this risk. So it would be very good if I could install it via Docker, since I already use ODM that way. Would this be possible?

daspk04 commented 3 years ago

Hello @viniciuspg,

OTBTF can be installed on Windows 10 via Docker Desktop. You can run the CPU version of the image there: just docker pull any OTBTF (CPU) image from https://gitlab.com/latelescop/docker/otbtf/container_registry/ and it should work. For example: docker pull mdl4eo/otbtf2.4:cpu.
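A minimal sequence from a terminal, assuming Docker Desktop is already installed and running:

    docker pull mdl4eo/otbtf2.4:cpu
    docker run -it mdl4eo/otbtf2.4:cpu bash

Inside the container, the OTB command-line applications and python (with TensorFlow) should then be available.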

At present, I am using OTBTF with the GPU on Windows 10 through WSL2. If you want to use WSL2 with CUDA (GPU enabled), you need an NVIDIA GPU (which I guess you already have, based on your notebook) and you have to install the latest Windows Insider build from the Dev Preview ring: https://blogs.windows.com/windows-insider/2020/06/15/introducing-windows-insider-channels/

How to install WSL2 with CUDA on Windows 10:
https://docs.nvidia.com/cuda/wsl-user-guide/index.html
https://docs.docker.com/docker-for-windows/wsl/#gpu-support
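Once WSL2 and the CUDA driver are set up, a quick way to check that the GPU is visible from a container would be something like the following (the :gpu tag is an assumption here, pick an actual GPU tag from the registry above):

    docker run --gpus all -it mdl4eo/otbtf2.4:gpu bash
    # then, inside the container:
    python -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"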

viniciuspg commented 3 years ago

Hi Mr. Rémi,

Yes, indeed the Acer Predator Helios 300 notebook has a GPU (RTX 2060). I managed to install the latest Windows Insider version: Windows 10 Home Single Language edition, Dev channel, installed on 04/18/2021, OS build 21359.1, Windows 10 Feature Experience Pack 321.7601.0.3.

I was able to enable WSL2 in Docker, and I installed Ubuntu (DISTRIB_RELEASE=20.04).

I tried to follow the NVIDIA CUDA Installation Guide for Linux (https://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html) and got up to step 2.3, "Verify the System Has gcc Installed":

    $ gcc --version
    gcc (Ubuntu 9.3.0-17ubuntu1~20.04) 9.3.0

But in step 2.6, "Download the NVIDIA CUDA Toolkit", I saw that the version listed for Windows 10 at https://developer.nvidia.com/cuda-downloads is cuda_11.3.0_465.89_win10.exe. However, last month, under the guidance of Peter Ests, the developer of Nenetic, I had already installed the versions below via an executable installer he created for me to test his software:

CUDA Toolkit (version 11.0.3_451.82)
cuDNN for CUDA 11.0 (version 8.0.4.30)

So I was wondering if the environment was already CUDA-ready or if I still had to install the latest versions of these.

Anyway, I also could not figure out how to run what you described as "Just docker pull any OTBTF (CPU/GPU) image from...". The only thing I have installed in Docker is OpenDroneMap, following some video tutorials, and that installation is done through Git, by typing cd WebODM and then ./webodm.sh start. That's all. So I was in doubt whether, to install OTBTF, I should type docker pull mdl4eo/otbtf2.4:cpu in Git, in the Windows terminal, in PowerShell, or in the Ubuntu terminal. I really couldn't figure out how to do this part.

Sorry for the long email, but I had to describe it a little to make it possible for you to understand what I did, so that you can assess whether it is feasible to continue, given my level of knowledge.

So please let me know whether it is possible to take advantage of the GPU environment I already have installed, or whether it is just a matter of installing cuda_11.3.0_465.89_win10.exe. If you think using the GPU is too complicated for me, I would like to at least test an OTBTF classification workflow, even if only with the CPU.

So, if possible, please tell me how to run "docker pull any OTBTF (CPU/GPU) image from...".

Thanks a lot in advance! Vinicius


remicres commented 3 years ago

Hi @viniciuspg ,

From what I understand, you should have installed the CUDA driver for WSL (downloaded from here) in Windows. You should also have successfully installed WSL2 on Windows, and Ubuntu 20.04.

So now, you run Ubuntu. The Ubuntu command prompt appears.

Is that working for you?

viniciuspg commented 3 years ago

Hi there!

Yeah, it helped a lot. However, it seems to me that instead of installing the CUDA Toolkit for Ubuntu, I should install it according to the WSL instructions, as stated in section 3.7: "These instructions must be used if you are installing in a WSL environment. Do not use the Ubuntu instructions in this case."

https://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html
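If I understand section 3.7 correctly, inside WSL only the toolkit should be installed, while the driver stays on the Windows side; something like this (the package name is the one from NVIDIA's guide for CUDA 11.0, to be adapted to the targeted version):

    sudo apt-get install -y cuda-toolkit-11-0
    # do NOT install the "cuda" or "cuda-drivers" meta-packages under WSL2:
    # they would try to pull in a Linux display driver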

[image: image.png]

Regards, Vinicius


viniciuspg commented 3 years ago

Good,

unfortunately I don't think I will be able to install it. I tried to follow the steps in this video https://www.youtube.com/watch?v=6oc8yoybvQQ, but I couldn't get past the "apt-get update" step. The error appears after line 9 (figure below).

In fact, if it wasn't for the video I wouldn't have gotten past the first command line, because I wouldn't have guessed that I have to type sudo before apt-key. Unfortunately, I am a biologist, not a programmer, so I can only execute the steps if I can copy and paste each command exactly. NVIDIA should provide more complete instructions, not ones where we have to guess parts. Just as the computer has no way of guessing a command that is not typed, I have no way of knowing a step that is not written. Too bad; if it wasn't for your help, Rémi, I wouldn't even have made it this far.
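For anyone following along, with the sudo prefix added the guide's repository setup looks roughly like this (the URLs are placeholders here, they have to be taken from the NVIDIA guide):

    sudo apt-key adv --fetch-keys <key URL from the NVIDIA guide>
    sudo sh -c 'echo "deb <repository URL from the NVIDIA guide> /" > /etc/apt/sources.list.d/cuda.list'
    sudo apt-get update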

From where I stand, I believe I can still try with CPU only, going directly to the last step ("in the Ubuntu command prompt, you will be able to use the OTBTF docker ..."). In that case, would the command be "pull mdl4eo/otbtf2.4:cpu"?

Regards, Vinicius

[image: image.png]


remicres commented 3 years ago

Hi @viniciuspg ,

I think that using the CPU only is more straightforward, and as @Pratyush1991 mentioned, you can do that using Docker Desktop.

I am sorry if my instructions weren't right. We need some "bulletproof" instructions for using the OTBTF docker image + WSL2 + NVIDIA GPU, and I am sure we will have that soon in the documentation section. The Docker/WSL2/NVIDIA stack is evolving quickly on Windows, but unfortunately it looks like some issues still remain...

You can give it a try with the CPU (it should be OK to begin with the first tutorials), and in the meantime keep an eye open here, where we will add some documentation for Windows soon!

vidlb commented 3 years ago

@remicres I just sent you an email, there is something wrong with your r2.4 build on dockerhub, no way to import tf (core dump)...

remicres commented 3 years ago

:scream:

I think that I will push your docker images on dockerhub...

vidlb commented 3 years ago

I guess you should, if you don't want to rebuild TF once again... Why did you build with the branch r2.4 instead of the default v2.4.1? It's probably an unstable branch they're using for development...

Also, I could send you the content of my bazel remote cache dir via SFTP or something (<10 GB when compressed), so you can rebuild 2.4.1 in a matter of 10 minutes!

remicres commented 3 years ago

I think that I pushed the wrong images...

My local images all work fine, with TF 2.4.1. I will try to re-push the image to dockerhub.

vidlb commented 3 years ago

Are you sure about 2.4.1? Because on Saturday you sent me this command by email:

    docker build --network='host' -t otbtf:cpu --build-arg BASE_IMG=ubuntu:20.04 --build-arg TF=r2.4 --build-arg NUMPY_SPEC===1.17.4 .
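For reference, the same command pinned to the release tag rather than the branch would presumably be:

    docker build --network='host' -t otbtf:cpu --build-arg BASE_IMG=ubuntu:20.04 --build-arg TF=v2.4.1 --build-arg NUMPY_SPEC===1.17.4 .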

remicres commented 3 years ago
cresson@cin-mo-gpu:~/decloud$ docker run -ti mdl4eo/otbtf2.4:cpu bash -c "python -c 'import tensorflow; print(tensorflow.__version__)'"
2.4.1

I don't understand why the image I pushed to dockerhub is not the right one!

vidlb commented 3 years ago

Indeed, for me this command prints nothing! Maybe you messed up your tags?

remicres commented 3 years ago

Weird. It looks like I pushed the right images. But I am unable to use them on my (Intel) laptop, even though they run fine on the GPU servers.

I will push your images on dockerhub for now...

vidlb commented 3 years ago

Interesting, could you show me your lscpu output? It could be related to bazel --config=opt; maybe you should avoid it to make sure your build is portable.
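For instance, the instruction sets that matter here can be pulled out of that output with something like:

    lscpu | grep -oE 'avx512[a-z]*|avx2?|fma|sse4_[12]' | sort -u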

remicres commented 3 years ago

Server 1 (working):

Architecture:        x86_64
CPU op-mode(s):      32-bit, 64-bit
Byte Order:          Little Endian
CPU(s):              8
On-line CPU(s) list: 0-7
Thread(s) per core:  1
Core(s) per socket:  4
Socket(s):           2
NUMA node(s):        2
Vendor ID:           GenuineIntel
CPU family:          6
Model:               85
Model name:          Intel(R) Xeon(R) Gold 5122 CPU @ 3.60GHz
Stepping:            4
CPU MHz:             2731.387
BogoMIPS:            7200.00
Virtualization:      VT-x
L1d cache:           32K
L1i cache:           32K
L2 cache:            1024K
L3 cache:            16896K
NUMA node0 CPU(s):   0,2,4,6
NUMA node1 CPU(s):   1,3,5,7
Flags:               fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single pti intel_ppin ssbd mba ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts pku ospke md_clear flush_l1d

Server 2 (working):

Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                16
On-line CPU(s) list:   0-15
Thread(s) per core:    2
Core(s) per socket:    8
Socket(s):             1
NUMA node(s):          1
Vendor ID:             GenuineIntel
CPU family:            6
Model:                 85
Model name:            Intel(R) Xeon(R) W-2145 CPU @ 3.70GHz
Stepping:              4
CPU MHz:               1480.289
CPU max MHz:           4500,0000
CPU min MHz:           1200,0000
BogoMIPS:              7391.83
Virtualization:        VT-x
L1d cache:             32K
L1i cache:             32K
L2 cache:              1024K
L3 cache:              11264K
NUMA node0 CPU(s):     0-15
Flags:                 fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch epb invpcid_single intel_pt ssbd ibrs ibpb stibp kaiser tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm mpx avx512f rdseed adx smap clflushopt clwb avx512cd xsaveopt xsavec xgetbv1 cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req md_clear flush_l1d

Laptop (not working):

Architecture:                    x86_64
CPU op-mode(s):                  32-bit, 64-bit
Byte Order:                      Little Endian
Address sizes:                   39 bits physical, 48 bits virtual
CPU(s):                          8
On-line CPU(s) list:             0-7
Thread(s) per core:              2
Core(s) per socket:              4
Socket(s):                       1
NUMA node(s):                    1
Vendor ID:                       GenuineIntel
CPU family:                      6
Model:                           158
Model name:                      Intel(R) Core(TM) i7-7820HQ CPU @ 2.90GHz
Stepping:                        9
CPU MHz:                         1540.825
CPU max MHz:                     3900,0000
CPU min MHz:                     800,0000
BogoMIPS:                        5799.77
Virtualization:                  VT-x
L1d cache:                       128 KiB
L1i cache:                       128 KiB
L2 cache:                        1 MiB
L3 cache:                        8 MiB
NUMA node0 CPU(s):               0-7
Vulnerability Itlb multihit:     KVM: Mitigation: Split huge pages
Vulnerability L1tf:              Mitigation; PTE Inversion; VMX conditional cache flushes, SMT vulnerable
Vulnerability Mds:               Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Meltdown:          Mitigation; PTI
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1:        Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2:        Mitigation; Full generic retpoline, IBPB conditional, IBRS_FW, STIBP conditional, RSB filling
Vulnerability Srbds:             Mitigation; Microcode
Vulnerability Tsx async abort:   Mitigation; Clear CPU buffers; SMT vulnerable
Flags:                           fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb invpcid_single pti ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm mpx rdseed adx smap clflushopt intel_pt xsaveopt xsavec xgetbv1 xsaves dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp md_clear flush_l1d

vidlb commented 3 years ago

Strange, there's SSE4 and AVX2 support everywhere, so it shouldn't be an issue. I tried with your latest push and it seems OK (our OVH server is also an Intel Xeon); which one is it?

I did not build the MKL version!

vidlb commented 3 years ago

Oh I see, maybe it is related to the AVX-512 instructions, which my Intel Xeon does not have, same as your laptop.

remicres commented 3 years ago

Thanks for this analysis... so you were right about bazel optimization flags.

vidlb commented 3 years ago

There's a dedicated package for AVX-512, so this would also match my explanation; it seems these instructions aren't supported by regular CPUs... https://pypi.org/project/intel-tensorflow-avx512/

remicres commented 3 years ago

So, for building docker images that work on most computers, it is better to stick to basic optimization flags...

vidlb commented 3 years ago

Yes, you need to remove either --config=opt from BZL_CONFIGS (should be enough) or -march=native from CC_OPT_FLAGS in build-env-tf.txt

vidlb commented 3 years ago

Or you could remove -march=native and add regular optimization flags like --copt=-mavx --copt=-mavx2 --copt=-mfma --copt=-mfpmath=both --copt=-msse4.2, which should work with any recent hardware... I do believe it's only a problem for AVX-512, but SSE4.2 and AVX2 could also be a problem with old CPUs.
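A sketch of what that could look like in build-env-tf.txt (the exact layout of the file may differ, so treat this as an illustration):

    # widely supported flags instead of -march=native
    export CC_OPT_FLAGS="-mavx -mavx2 -mfma -mfpmath=both -msse4.2"
    # (alternatively, simply drop --config=opt from BZL_CONFIGS, as mentioned above)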

remicres commented 3 years ago

Thanks.

This was a bit off-topic, but it clearly deserved some explanation. And it's one less issue for Windows users...

viniciuspg commented 3 years ago

Hi Remi

Thank you very much, once again. So I will try to install via Docker Desktop. Below I copy the Docker Desktop screen where I think I should start.

[image: image.png]

To use the OTBTF CPU image, should I install it via this screen (i.e., insert the command lines in this part of Docker Desktop)? And what would the command sequence be? Sorry, but I did not find ready-made instructions for the installation. I only found the command-line instructions for running OTBTF via Docker, posted by Pratyush1991 at https://github.com/remicres/otbtf/issues/10.

If I am successful, I will write a detailed account of how I got it running, so that other inexperienced users may be able to install and use OTBTF.

Regards, Vinicius


remicres commented 3 years ago

Hi @viniciuspg ,

I can't see your screen capture :open_mouth:

I never tried Docker Desktop myself, but it looks like you can use Windows PowerShell to run docker images (source). Open a PowerShell terminal and type docker run -it mdl4eo/otbtf2.4:cpu bash to enter the container in interactive mode.
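If you need to access files stored on the Windows side from inside the container, a bind mount can be added to the same command (the Windows path below is just an example):

    docker run -it -v C:\Users\you\Documents:/data mdl4eo/otbtf2.4:cpu bash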

vidlb commented 3 years ago

Hi @viniciuspg,

You can use either PowerShell or cmd.exe to execute docker pull, create or run commands. Then I believe the easiest way would be to create a persistent container, which you can access later from the Docker Desktop GUI (it's nicer than CMD...). The commands would be as follows:

    docker create --name otbtf-cpu --interactive --tty mdl4eo/otbtf2.4:cpu

Then you could start an interactive shell with docker start -i otbtf-cpu

Or just use the GUI in Docker Desktop to start a shell; the container you just created should now appear on the GUI main page!
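Once the shell is open, a quick sanity check (the same check used earlier in this thread) is:

    python -c "import tensorflow; print(tensorflow.__version__)"

which should print 2.4.1.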

viniciuspg commented 3 years ago

Sorry Remi,

I replied directly in the email and not here on the forum, so I believe the image did not go up. I uploaded it below, just so you and others can see what it is about. I believe it is now visible.

I will try the PowerShell approach to see if I can move forward.

Regards, Vinicius

[screenshot: docker_desktop_command_line]

viniciuspg commented 3 years ago

Thank you very much friends!

With this docker create --name otbtf-cpu --interactive --tty mdl4eo/otbtf2.4:cpu command in PowerShell, I believe the installation was accomplished. I believe this will help many other colleagues, because I just copied and pasted this line into PowerShell and the magic happened: the OTBTF container appeared in the Docker GUI. After that, I just clicked on the play button and it turned green, meaning that it is running. Then, clicking on the container and on the "CLI" icon (see screenshot), a window opened, which apparently is where I should type the commands for using OTBTF, based on what was described by Pratyush1991 at https://github.com/remicres/otbtf/issues/10, correct? I will try with these instructions and with those described in Rémi's book "Deep Learning for Remote Sensing Images with Open Source Software".

Regards, Vinicius

[screenshots: 1powershell, 2docker_desktop_OTBTF_CREATED, 3docker_desktop_OTBTF_RUNNING, 4docker_desktop_OTBTF_COMMAND_LINE_WINDOW]

remicres commented 3 years ago

That is great!

I will push some documentation in a few minutes.

It would be great to add a section to help Windows users. Can you tell me if this is correct:

  1. install Docker Desktop
  2. open a PowerShell terminal, and type docker create --name otbtf-cpu --interactive --tty mdl4eo/otbtf2.4:cpu
  3. in Docker Desktop, check that the container is running in the Containers/Apps menu
  4. in Docker Desktop, click on the icon that you highlighted in your screenshot, and use the bash terminal that pops up

Is that it?

viniciuspg commented 3 years ago

Yes Remi,

it was exactly these steps, according to the screenshots I posted. However, it may be worth explaining the following. My Docker was already installed and enabled with WSL2. I am not sure if this will make any difference, but when I installed Docker, WSL2 was not enabled, and it was necessary to follow the steps described at https://docs.microsoft.com/en-us/windows/wsl/install-win10#manual-installation-steps, especially step 5 to enable WSL2 in Docker.

Att, Vinicius

remicres commented 3 years ago

Great, I will add your screenshots in the documentation, if you allow me.

With WSL2 enabled, I feel like you are very close to being able to use your GPU... but let's wait a bit for that :wink: Documentation incoming...
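When the GPU path is ready, the same workflow should apply with a GPU-enabled image; something along these lines (the :gpu tag and the --gpus flag are assumptions to be confirmed in the upcoming documentation):

    docker create --name otbtf-gpu --gpus all --interactive --tty mdl4eo/otbtf2.4:gpu
    docker start -i otbtf-gpu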

viniciuspg commented 3 years ago

Please use whatever you need, if necessary I can provide more details. Just ask me and I'll be available!

Thank you very much!

remicres commented 3 years ago

@viniciuspg if you succeed in using OTBTF with the GPU enabled on Windows 10, it would be nice to detail the steps precisely. I would add them to the documentation!

viniciuspg commented 3 years ago

Dear Remi,

you can be sure that if I succeed, I will report back. I even took the liberty of sending NVIDIA a request for help. I believe that if they can point us to a developer who can help us, we have a better chance of success.

Regards, Vinicius

viniciuspg commented 3 years ago

Hi Remi,

Last weekend we were trying at all costs to prepare the environment to enable GPU processing with OTBTF on Windows 10, without success, remember? One of the great difficulties was precisely the step described by NVIDIA, whose instructions seemed incomplete to me, even though they provide several details. Well, I made contact with the company through their Networking Academy, because as I understand it they have a dedicated program for academic projects, as is the case with OTB.

I believe that if OTBTF becomes readily usable, integrated into QGIS as OTB itself is, it will be of great benefit to the community of users, especially in developing countries such as Brazil, because it could be used not only by programming experts but also by professionals from other fields of knowledge who depend on remote sensing for their research.

Well, in return I received this contact, Shani Berko, who I believe can help with this development.

In this sense, I would like to have your e-mail address so that I can put you in touch with Shani Berko, so that together you can assess whether this approach is promising. If so, I would be happy to help you test the application as well as help describe the step-by-step approach. Sorry to reopen this issue, but this is the only way I could reach you. I don't know how to send a private message here, but my e-mail address is viniciuspg@gmail.com. Feel free to contact me there.

Regards, Vinicius