Closed zonexo closed 1 year ago
Hi @zonexo ,
I guess you didn't activate the DRLinFluids Python environment correctly, and are instead using your own Python environment under `.local/lib/python3.8`.
Could you please check your terminal prefix after entering `drl`? The correct one should be `(DRLinFluids) Singularity>`.
Besides, please check the output of `which python` and see whether it is `/opt/miniconda3/envs/DRLinFluids/bin/python`.
To solve this problem, you can enter the `drl` command twice to re-activate the DRLinFluids environment, then repeat the above check procedure.
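The checks above could be scripted as a small sanity test. This is just a sketch: `check_python` is a hypothetical helper name, and the expected interpreter path is the one quoted in this thread:

```shell
# Hypothetical sanity check: compare the interpreter reported by the shell
# against the container's DRLinFluids interpreter path quoted above.
check_python() {
    # $1: path of the active python interpreter
    case "$1" in
        /opt/miniconda3/envs/DRLinFluids/bin/python) echo "environment OK" ;;
        *) echo "wrong python: $1" ;;
    esac
}

# Inside the container, after entering `drl` (twice if needed):
check_python "$(command -v python || echo none)"
```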
Please let me know if the problem still persists.
Best, Qiulei
Hi Qiulei,
My prompt is
(DRLinFluids) Singularity>
I ran `which python` and got:
/opt/miniconda3/envs/DRLinFluids/bin/python
So everything seems correct, but I get the same problem. It seems that Singularity is still using my own `.local` Python. I'm not sure why.
It's very weird. I tested this example on my Ubuntu 20.04 machine and cannot reproduce the issue. Could you please tell me which Linux distribution you are using?
Hi, when I first run Singularity with DRLinFluids.sif, I get the welcome message with no error.
When I run `drl`, I get this error:
CommandNotFoundError: Your shell has not been properly configured to use 'conda activate'.
To initialize your shell, run
$ conda init <SHELL_NAME>
Currently supported shells are:
- bash
- fish
- tcsh
- xonsh
- zsh
- powershell
See 'conda init --help' for more information and options.
IMPORTANT: You may need to close and restart your shell after running 'conda init'.
I followed the instructions and ran `conda init bash`. There was no more error message, but I wonder if it's related to the problem.
I also tried `--no-home` when running Singularity, but I got a lot of errors when I ran `conda init bash`:
# >>>>>>>>>>>>>>>>>>>>>> ERROR REPORT <<<<<<<<<<<<<<<<<<<<<<
Traceback (most recent call last):
File "/opt/miniconda3/lib/python3.10/site-packages/conda/exceptions.py", line 1132, in __call__
return func(*args, **kwargs)
File "/opt/miniconda3/lib/python3.10/site-packages/conda/cli/main.py", line 69, in main_subshell
exit_code = do_call(args, p)
File "/opt/miniconda3/lib/python3.10/site-packages/conda/cli/conda_argparse.py", line 122, in do_call
return getattr(module, func_name)(args, parser)
File "/opt/miniconda3/lib/python3.10/site-packages/conda/cli/main_init.py", line 33, in execute
return initialize(context.conda_prefix, selected_shells, for_user, args.system,
File "/opt/miniconda3/lib/python3.10/site-packages/conda/core/initialize.py", line 113, in initialize
run_plan_elevated(plan2)
File "/opt/miniconda3/lib/python3.10/site-packages/conda/core/initialize.py", line 709, in run_plan_elevated
result = subprocess_call(
File "/opt/miniconda3/lib/python3.10/site-packages/conda/gateways/subprocess.py", line 87, in subprocess_call
process = Popen(
File "/opt/miniconda3/lib/python3.10/subprocess.py", line 971, in __init__
self._execute_child(args, executable, preexec_fn, close_fds,
File "/opt/miniconda3/lib/python3.10/subprocess.py", line 1847, in _execute_child
raise child_exception_type(errno_num, err_msg, err_filename)
FileNotFoundError: [Errno 2] No such file or directory: 'sudo'
`$ /opt/miniconda3/condabin/conda init bash`
environment variables:
ADIOS2_ARCH_PATH=/home/project/11001487/OpenFOAM/ThirdParty-
v2206/platforms/linux64Gcc841/ADIOS2-2.7.1
BOOST_ARCH_PATH=/home/project/11001487/OpenFOAM/ThirdParty-
v2206/platforms/linux64Gcc841/boost_1_74_0
CGAL_ARCH_PATH=/home/project/11001487/OpenFOAM/ThirdParty-
v2206/platforms/linux64Gcc841/CGAL-4.14.3
CIO_TEST=<not set>
CONDA_EXE=/opt/miniconda3/bin/conda
CONDA_PYTHON_EXE=/opt/miniconda3/bin/python
CONDA_ROOT=/opt/miniconda3
CONDA_SHLVL=0
CRAYPAT_LD_LIBRARY_PATH=/opt/cray/pe/gcc-libs:/opt/cray/gcc-
libs:/opt/cray/pe/perftools/22.04.0/lib64
CRAY_LD_LIBRARY_PATH=/opt/cray/pe/libsci/21.08.1.2/CRAY/9.0/x86_64/lib:/opt/cray/pe/mpich/8
.1.15/ofi/cray/10.0/lib:/opt/cray/pe/mpich/8.1.15/gtl/lib:/opt/cray/pe
/dsmml/0.2.2/dsmml/lib:/opt/cray/pe/cce/13.0.2/cce-clang/x86_64/lib:/o
pt/cray/pe/cce/13.0.2/cce/x86_64/lib:/opt/cray/pe/perftools/22.04.0/li
b64
CURL_CA_BUNDLE=<not set>
FFTW_ARCH_PATH=/home/project/11001487/OpenFOAM/ThirdParty-
v2206/platforms/linux64Gcc841/fftw-3.3.10
FPATH=:/opt/cray/pe/modules/3.2.11.6/init/sh_funcs/no_redirect
LD_LIBRARY_PATH=/.singularity.d/libs
LD_PRELOAD=<not set>
MANPATH=/app/apps/singularity/3.10.0/share/man:/opt/cray/pe/pals/1.1.6/man:/op
t/cray/pe/libsci/21.08.1.2/man:/opt/cray/pe/man/csmlversion:/opt/cray/
pe/mpich/8.1.15/ofi/man:/opt/cray/pe/mpich/8.1.15/man/mpich:/opt/cray/
pe/dsmml/0.2.2/dsmml/man/:/opt/cray/pe/craype/2.7.15/man:/opt/cray/pe/
cce/13.0.2/cce-clang/x86_64/share/man:/opt/cray/pe/cce/13.0.2/man:/opt
/cray/pe/perftools/22.04.0/man:/opt/cray/pe/papi/6.0.0.14/share/pdoc/m
an:/opt/cray/libfabric/1.11.0.4.125/share/man:/opt/cray/pe/modules/3.2
.11.6/share/man:/opt/c3/man:/opt/pbs/share/man:/opt/clmgr/man:/opt/sgi
/share/man:/opt/clmgr/share/man:/opt/clmgr/lib/cm-
cli/man:/opt/cray/pe/man:
MODULEPATH=/opt/cray/pe/perftools/22.04.0/modulefiles:/opt/cray/pe/modulefiles:/o
pt/cray/modulefiles:/opt/modulefiles:/opt/cray/pe/craype-targets/defau
lt/modulefiles:/app/apps/modulefiles:/app/libs/modulefiles:/home/users
/nus/tsltaywb/modules
MPI_ARCH_PATH=/opt/cray/pe/mpich/8.1.15/ofi/cray/10.0
NLSPATH=/opt/cray/pe/cce/13.0.2/cce/x86_64/share/nls/En/%N.cat
PATH=/opt/miniconda3/condabin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr
/bin:/sbin:/bin
PE_AOCC_DEFAULT_FIXED_PKGCONFIG_PATH=/opt/cray/pe/parallel-
netcdf/1.12.2.1/AOCC/3.0/lib/pkgconfig:/opt/cray/pe/netcdf-hdf5paralle
l/4.8.1.1/AOCC/3.0/lib/pkgconfig:/opt/cray/pe/netcdf/4.8.1.1/AOCC/3.0/
lib/pkgconfig:/opt/cray/pe/hdf5-
parallel/1.12.1.1/AOCC/3.0/lib/pkgconfig:/opt/cray/pe/hdf5/1.12.1.1/AO
CC/3.0/lib/pkgconfig
PE_CRAYCLANG_DEFAULT_FIXED_PKGCONFIG_PATH=/opt/cray/pe/parallel-
netcdf/1.12.2.1/CRAYCLANG/10.0/lib/pkgconfig:/opt/cray/pe/netcdf-hdf5p
arallel/4.8.1.1/CRAYCLANG/10.0/lib/pkgconfig:/opt/cray/pe/netcdf/4.8.1
.1/CRAYCLANG/10.0/lib/pkgconfig:/opt/cray/pe/hdf5-
parallel/1.12.1.1/CRAYCLANG/10.0/lib/pkgconfig:/opt/cray/pe/hdf5/1.12.
1.1/CRAYCLANG/10.0/lib/pkgconfig
PE_CRAYCLANG_FIXED_PKGCONFIG_PATH=/opt/cray/pe/libsci/21.08.1.2/CRAY/90/x86_64/lib/pkgconfig
PE_DSMML_DEFAULT_VOLATILE_PKGCONFIG_PATH=/opt/cray/pe/dsmml/0.2.2/dsmml/dsmml/lib/pkgconfig
PE_DSMML_VOLATILE_PKGCONFIG_PATH=/opt/cray/pe/dsmml/0.2.2/dsmml/lib/pkgconfig
PE_FFTW_DEFAULT_VOLATILE_PKGCONFIG_PATH=/opt/cray/pe/fftw/3.3.8.13/@PE_FFTW_DEFAULT_TARGET@/lib/pkgconfig
PE_GNU_DEFAULT_FIXED_PKGCONFIG_PATH=/opt/cray/pe/parallel-
netcdf/1.12.2.1/GNU/9.1/lib/pkgconfig:/opt/cray/pe/netcdf-hdf5parallel
/4.8.1.1/GNU/9.1/lib/pkgconfig:/opt/cray/pe/netcdf/4.8.1.1/GNU/9.1/lib
/pkgconfig:/opt/cray/pe/hdf5-
parallel/1.12.1.1/GNU/9.1/lib/pkgconfig:/opt/cray/pe/hdf5/1.12.1.1/GNU
/9.1/lib/pkgconfig
PE_INTEL_DEFAULT_FIXED_PKGCONFIG_PATH=/opt/cray/pe/parallel-
netcdf/1.12.2.1/INTEL/19.0/lib/pkgconfig:/opt/cray/pe/netcdf-hdf5paral
lel/4.8.1.1/INTEL/19.0/lib/pkgconfig:/opt/cray/pe/netcdf/4.8.1.1/INTEL
/19.0/lib/pkgconfig:/opt/cray/pe/mpich/8.1.15/ofi/intel/19.0/lib/pkgco
nfig:/opt/cray/pe/hdf5-
parallel/1.12.1.1/INTEL/19.0/lib/pkgconfig:/opt/cray/pe/hdf5/1.12.1.1/
INTEL/19.0/lib/pkgconfig
PE_INTEL_FIXED_PKGCONFIG_PATH=/opt/cray/pe/mpich/8.1.15/ofi/intel/19.0/lib/pkgconfig
PE_LIBSCI_DEFAULT_VOLATILE_PKGCONFIG_PATH=/opt/cray/pe/libsci/21.08.1.2/@PRGENV@/@PE_LIBSCI_DEFAULT_GENCOMPS@/@P
E_LIBSCI_DEFAULT_TARGET@/lib/pkgconfig
PE_LIBSCI_VOLATILE_PKGCONFIG_PATH=/opt/cray/pe/libsci/21.08.1.2/@PRGENV@/@PE_LIBSCI_GENCOMPS@/@PE_LIBSCI
_TARGET@/lib/pkgconfig
PE_MPICH_DEFAULT_VOLATILE_PKGCONFIG_PATH=/opt/cray/pe/mpich/8.1.15/ofi/@PRGENV@/@PE_MPICH_DEFAULT_GENCOMPS@/lib
/pkgconfig:/opt/cray/pe/mpich/8.1.15/gtl/lib/pkgconfig
PE_MPICH_VOLATILE_PKGCONFIG_PATH=/opt/cray/pe/mpich/8.1.15/ofi/@PRGENV@/@PE_MPICH_GENCOMPS@/lib/pkgconf
ig:/opt/cray/pe/mpich/8.1.15/gtl/lib/pkgconfig
PE_PKG_CONFIG_PATH=/opt/cray/pe/valgrind4hpc/2.12.7/lib/pkgconfig:/opt/cray/pe/cti/2.15.1
0/lib/pkgconfig:/opt/cray/pe/atp/3.14.10/lib/pkgconfig
PE_SMA_DEFAULT_VOLATILE_PKGCONFIG_PATH=/opt/cray/pe/sma/11.5.3.beta/ofi/sma@PE_SMA_DEFAULT_DIR_DEFAULT64@/lib
64/pkgconfig
PKG_CONFIG_PATH=/opt/cray/pe/craype/2.7.15/pkg-config:/opt/cray/libfabric/1.11.0.4.125
/lib64/pkgconfig:/opt/cray/pe/atp/3.14.10/lib/pkgconfig
REQUESTS_CA_BUNDLE=<not set>
SCOTCH_ARCH_PATH=/home/project/11001487/OpenFOAM/ThirdParty-
v2206/platforms/linux64Gcc841DPInt32/scotch_6.1.0
SSL_CERT_FILE=<not set>
USER_PATH=/app/apps/singularity/3.10.0/bin:/opt/miniconda3/bin:/home/users/nus/t
sltaywb/ai/Siemens/18.02.008-R8/STAR-CCM+18.02.008-
R8/star/libnsl/:/home/users/nus/tsltaywb/.local/bin:/home/users/nus/ts
ltaywb/shortcuts:/home/users/nus/tsltaywb/OpenFOAM/tsltaywb-
v2206/platforms/linux64Gcc841DPInt32Opt/bin:/home/project/11001487/Ope
nFOAM/OpenFOAM-
v2206/site/2206/platforms/linux64Gcc841DPInt32Opt/bin:/home/project/11
001487/OpenFOAM/OpenFOAM-
v2206/platforms/linux64Gcc841DPInt32Opt/bin:/home/project/11001487/Ope
nFOAM/OpenFOAM-v2206/bin:/home/project/11001487/OpenFOAM/OpenFOAM-
v2206/wmake:/opt/cray/pe/pals/1.1.6/bin:/opt/cray/pe/craype/2.7.15/bin
:/opt/cray/pe/cce/13.0.2/binutils/x86_64/x86_64-pc-linux-gnu/bin:/opt/
cray/pe/cce/13.0.2/binutils/cross/x86_64-aarch64/aarch64-linux-gnu/../
bin:/opt/cray/pe/cce/13.0.2/utils/x86_64/bin:/opt/cray/pe/cce/13.0.2/b
in:/opt/cray/pe/perftools/22.04.0/bin:/opt/cray/pe/papi/6.0.0.14/bin:/
opt/cray/libfabric/1.11.0.4.125/bin:/opt/clmgr/sbin:/opt/clmgr/bin:/op
t/sgi/sbin:/opt/sgi/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/s
bin:/app/apps/local/bin:/opt/c3/bin:/usr/lpp/mmfs/bin:/opt/pbs/bin:/sb
in:/bin:/opt/cray/pe/bin:/bin:/usr/bin:/sbin:/usr/sbin:/usr/local/bin:
/usr/local/sbin
active environment : None
shell level : 0
user config file : /home/users/nus/tsltaywb/.condarc
populated config files :
conda version : 23.3.1
conda-build version : not installed
python version : 3.10.10.final.0
virtual packages : __archspec=1=x86_64
__glibc=2.31=0
__linux=4.18.0=0
__unix=0=0
base environment : /opt/miniconda3 (read only)
conda av data dir : /opt/miniconda3/etc/conda
conda av metadata url : None
channel URLs : https://repo.anaconda.com/pkgs/main/linux-64
https://repo.anaconda.com/pkgs/main/noarch
https://repo.anaconda.com/pkgs/r/linux-64
https://repo.anaconda.com/pkgs/r/noarch
package cache : /opt/miniconda3/pkgs
/home/users/nus/tsltaywb/.conda/pkgs
envs directories : /home/users/nus/tsltaywb/.conda/envs
/opt/miniconda3/envs
platform : linux-64
user-agent : conda/23.3.1 requests/2.28.1 CPython/3.10.10 Linux/4.18.0-305.25.1.el8_4.x86_64 ubuntu/20.04.6 glibc/2.31
UID:GID : 20822:1002
netrc file : None
offline mode : False
An unexpected error has occurred. Conda has prepared the above report.
If submitted, this report will be used by core maintainers to improve
future releases of conda.
Would you like conda to send this report to the core maintainers? [y/N]:
Btw, I am running Red Hat Enterprise Linux 8 cluster.
Hi @zonexo ,
I'm not an expert in Singularity container technology, but I suspect that this issue may be related to root privileges.
Generally, using the root account to enter the container or activate the `drl` environment can result in conda's `CommandNotFoundError`.
In my personal experience, HPC/cluster systems can introduce permission issues that desktop Linux systems do not, which may require further investigation. Switching to a different machine may help resolve some problems.
Anyway, I may not be able to provide further answers to your environment-related questions, so I will close this issue, but feel free to reopen it once you have further comments. Thank you very much for your feedback. : )
Maybe this indicates that it would be useful to update/push a newer container, making sure absolutely all needed dependencies are part of it? Just curious (I looked into this a long time ago and don't really remember): what kind of container image is used at the moment? An option could be to set up and use a Singularity sandbox container, which should be 100% isolated and reproducible. I am using this on several projects with complex dependency structures now, and it works very well (for example, the dependency structure of https://github.com/KTH-FlowAI/DeepReinforcementLearning_RayleighBenard2D_Control is a nightmare, but using the Singularity sandbox as provided, everything has been working very smoothly). If this is of interest, here are the "self notes" I took about it; feel free to ask for more information there:
https://github.com/jerabaul29/guidelines_workflow_project
Let me know if I should move this to a new issue :) .
Hi @jerabaul29 ,
Yes, we currently use Singularity to build the DRLinFluids image. But I still can't exactly explain the issue mentioned in this thread, especially the conda environment part, since I cannot reproduce it on my own computer.
I very much appreciate your suggestions. Next, I will try to rebuild the DRLinFluids image according to your notes and see if the problem can be solved.
I will reopen the issue and comment here once this work is finished.
Sounds excellent. I think I had given a try at the singularity container used now a long time ago, and that I had got it to work but after a few issues. Let me know when you have the "re-built container as a sandbox", and I can try again. I should have a bit more time these days to help with the debugging :) .
I suspect the Singularity sandbox cannot really be shared through the Singularity hub - if you can provide the sandbox as a .tar (possibly in segments if necessary) hosted on any file-sharing server, that could be the simplest. Let me know when this is available, and I will give it a try :) .
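As a side note on the segmented-.tar idea: standard `split`/`cat` round-trips losslessly. The sketch below demonstrates this on a throwaway file; the DRLinFluids file names in the comments are hypothetical placeholders:

```shell
# Hypothetical usage on the real archive:
#   split -b 1G DRLinFluids.tar.gz DRLinFluids.tar.gz.part_   # sender
#   cat DRLinFluids.tar.gz.part_* > DRLinFluids.tar.gz        # receiver
#   tar xfp DRLinFluids.tar.gz

# Self-contained demonstration on a small dummy file:
printf 'dummy archive contents' > /tmp/demo.tar.gz
split -b 8 /tmp/demo.tar.gz /tmp/demo.part_     # creates demo.part_aa, _ab, ...
cat /tmp/demo.part_* > /tmp/demo_joined.tar.gz  # glob order restores the bytes
cmp /tmp/demo.tar.gz /tmp/demo_joined.tar.gz && echo "segments reassemble identically"
```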
Thank you @jerabaul29 and @venturi123 !
Hi, I have checked with my cluster's tech experts. They suggested:
singularity shell DRLinFluids.sif
. /opt/miniconda3/etc/profile.d/conda.sh
conda activate DRLinFluids
It seems to be working. However, I faced another issue; I will report it as a new case.
Hi All,
I've finished repacking our DRLinFluids image as a Singularity sandbox.
Here is the Google drive link of the image for your information.
Besides, I tried to resolve the "no protocol specified" warning caused by OpenMPI, which seems to be related to NVIDIA displays and not to DRL or OpenFOAM, so I suppressed this warning according to this thread.
I also noticed that logging into Singularity with the root account conflicts with OpenMPI. A warning regarding OpenMPI is raised as follows:
mpirun has detected an attempt to run as root.
Running as root is *strongly* discouraged as any mistake (e.g., in
defining TMPDIR) or bug can result in catastrophic damage to the OS
file system, leaving your system in an unusable state.
We strongly suggest that you run mpirun as a non-root user.
You can override this protection by adding the --allow-run-as-root option
to the cmd line or by setting two environment variables in the following way:
the variable OMPI_ALLOW_RUN_AS_ROOT=1 to indicate the desire to override this
protection, and OMPI_ALLOW_RUN_AS_ROOT_CONFIRM=1 to confirm the choice and
add one more layer of certainty that you want to do so.
We reiterate our advice against doing so - please proceed at your own risk.
Therefore, I am considering using a regular account to log in to Singularity, which would be a better choice and would not require the use of the fakeroot flag.
Many thanks for your assistance. : )
Amazing, many thanks! Sounds good, I will test it as soon as I can, I let you know :) .
I do not think (but I may be wrong) that the warning is as bad as it sounds - since everything is happening inside a singularity sandbox, the only filesystem that openmpi can modify is the filesystem within the sandbox itself, not the full host. So, given this isolation, it should not be possible to damage the host, only the content of the specific sandbox, which can be re-spin from the image as many times as necessary. So this warning should not be too bad. This comes without warranty though :) .
That looks very good, I was able to assemble the sandbox and run it. A very minor point: could you consider adding to the checksums list the checksum of the full reconstructed .tar.gz file? (It should be something like `af77e59fadba56a7e115929ae1400ecb4612aa0f936bad25efe40b4c571d2e74` if I am correct :) ; note this may change if you update the sandbox.)
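Such a check could look like the following sketch; `verify_sha256` is a made-up helper name, and the digest is the one quoted above (it will change whenever the sandbox is rebuilt):

```shell
# Sketch: gate the reassembled archive on a published sha256 digest.
verify_sha256() {
    # $1: file to check, $2: expected hex digest
    actual=$(sha256sum "$1" 2>/dev/null | awk '{print $1}')
    if [ "$actual" = "$2" ]; then echo "checksum OK"; else echo "checksum MISMATCH"; fi
}

# Hypothetical invocation against the downloaded sandbox archive:
verify_sha256 DRLinFluids.tar.gz \
    af77e59fadba56a7e115929ae1400ecb4612aa0f936bad25efe40b4c571d2e74
```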
It looks like I am hitting a minor issue trying to run the example within the new sandbox; have you hit it / do you know how to solve it?
~/Downloads/singularity_sandbox_DRLinFluids> singularity shell --writable --fakeroot --no-home DRLinFluids
Welcome to DRLinFluids! (SIF Version: 0.1.0)
For more details, please refer to our Github repository:
https://github.com/venturi123/DRLinFluids
To toggle DRLinFluids Python environment, please enter (repeatedly):
>>> drl
To activate OpenFOAM8 environment:
>>> of8
Singularity> cd ../home/
Singularity> git clone https://github.com/venturi123/DRLinFluids.git
Cloning into 'DRLinFluids'...
remote: Enumerating objects: 316, done.
remote: Counting objects: 100% (316/316), done.
remote: Compressing objects: 100% (215/215), done.
remote: Total 316 (delta 124), reused 256 (delta 83), pack-reused 0
Receiving objects: 100% (316/316), 7.81 MiB | 6.11 MiB/s, done.
Resolving deltas: 100% (124/124), done.
Singularity> cd DRLinFluids/examples/
Singularity> drl
(DRLinFluids) Singularity> of8
(DRLinFluids) Singularity> cd cylinder2D_multiprocessing
(DRLinFluids) Singularity> python DRLinFluids_cylinder/launch_multiprocessing_traning_cylinder.py
--------------------------------------------------------------------------
mpirun has detected an attempt to run as root.
Running as root is *strongly* discouraged as any mistake (e.g., in
defining TMPDIR) or bug can result in catastrophic damage to the OS
file system, leaving your system in an unusable state.
We strongly suggest that you run mpirun as a non-root user.
You can override this protection by adding the --allow-run-as-root option
to the cmd line or by setting two environment variables in the following way:
the variable OMPI_ALLOW_RUN_AS_ROOT=1 to indicate the desire to override this
protection, and OMPI_ALLOW_RUN_AS_ROOT_CONFIRM=1 to confirm the choice and
add one more layer of certainty that you want to do so.
We reiterate our advice against doing so - please proceed at your own risk.
--------------------------------------------------------------------------
Traceback (most recent call last):
File "DRLinFluids_cylinder/launch_multiprocessing_traning_cylinder.py", line 89, in <module>
env = envobject_cylinder.FlowAroundCylinder2D(
File "/home/DRLinFluids/examples/cylinder2D_multiprocessing/DRLinFluids_cylinder/DRLinFluids/environment_tensorforce.py", line 114, in __init__
cfd.run_init(foam_root_path, foam_params)
File "/home/DRLinFluids/examples/cylinder2D_multiprocessing/DRLinFluids_cylinder/DRLinFluids/utils.py", line 414, in wrapper
func(*args, **kwargs)
File "/home/DRLinFluids/examples/cylinder2D_multiprocessing/DRLinFluids_cylinder/DRLinFluids/cfd.py", line 144, in run_init
subprocess.run(
File "/opt/miniconda3/envs/DRLinFluids/lib/python3.8/subprocess.py", line 516, in run
raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command 'cd /home/DRLinFluids/examples/cylinder2D_multiprocessing/env01 && . /opt/openfoam8/etc/bashrc && reconstructPar > /dev/null' returned non-zero exit status 1.
I also tried recompiling, but I seem to get the same issue:
(DRLinFluids) Singularity> cd ..
(DRLinFluids) Singularity> ls
cylinder2D_multiprocessing newbc square2D_VIV_multiprocessing square2D_multiprocessing
(DRLinFluids) Singularity> cd newbc/
(DRLinFluids) Singularity> ./wmakeall
wmakeLnInclude: linking include files to ./lnInclude
Making dependency list for source file incrementallinearJetUniVelocityFvPatchVectorField.C
g++ -std=c++11 -m64 -Dlinux64 -DWM_ARCH_OPTION=64 -DWM_DP -DWM_LABEL_SIZE=32 -Wall -Wextra -Wold-style-cast -Wnon-virtual-dtor -Wno-unused-parameter -Wno-invalid-offsetof -Wno-attributes -O3 -DNoRepository -ftemplate-depth-100 -I/opt/openfoam8/src/finiteVolume/lnInclude -I/opt/openfoam8/src/meshTools/lnInclude -IlnInclude -I. -I/opt/openfoam8/src/OpenFOAM/lnInclude -I/opt/openfoam8/src/OSspecific/POSIX/lnInclude -fPIC -c incrementallinearJetUniVelocityFvPatchVectorField.C -o Make/linux64GccDPInt32Opt/incrementallinearJetUniVelocityFvPatchVectorField.o
g++ -std=c++11 -m64 -Dlinux64 -DWM_ARCH_OPTION=64 -DWM_DP -DWM_LABEL_SIZE=32 -Wall -Wextra -Wold-style-cast -Wnon-virtual-dtor -Wno-unused-parameter -Wno-invalid-offsetof -Wno-attributes -O3 -DNoRepository -ftemplate-depth-100 -I/opt/openfoam8/src/finiteVolume/lnInclude -I/opt/openfoam8/src/meshTools/lnInclude -IlnInclude -I. -I/opt/openfoam8/src/OpenFOAM/lnInclude -I/opt/openfoam8/src/OSspecific/POSIX/lnInclude -fPIC -fuse-ld=bfd -shared -Xlinker --add-needed -Xlinker --no-as-needed Make/linux64GccDPInt32Opt/incrementallinearJetUniVelocityFvPatchVectorField.o -L/opt/openfoam8/platforms/linux64GccDPInt32Opt/lib \
-lfiniteVolume -lmeshTools -o /opt/openfoam8/platforms/linux64GccDPInt32Opt/lib/libincrementallinearJetUniVelocity.so
wmakeLnInclude: linking include files to ./lnInclude
Making dependency list for source file jetParabolicVelocityFvPatchVectorField.C
g++ -std=c++11 -m64 -Dlinux64 -DWM_ARCH_OPTION=64 -DWM_DP -DWM_LABEL_SIZE=32 -Wall -Wextra -Wold-style-cast -Wnon-virtual-dtor -Wno-unused-parameter -Wno-invalid-offsetof -Wno-attributes -O3 -DNoRepository -ftemplate-depth-100 -I/opt/openfoam8/src/finiteVolume/lnInclude -I/opt/openfoam8/src/meshTools/lnInclude -IlnInclude -I. -I/opt/openfoam8/src/OpenFOAM/lnInclude -I/opt/openfoam8/src/OSspecific/POSIX/lnInclude -fPIC -c jetParabolicVelocityFvPatchVectorField.C -o Make/linux64GccDPInt32Opt/jetParabolicVelocityFvPatchVectorField.o
g++ -std=c++11 -m64 -Dlinux64 -DWM_ARCH_OPTION=64 -DWM_DP -DWM_LABEL_SIZE=32 -Wall -Wextra -Wold-style-cast -Wnon-virtual-dtor -Wno-unused-parameter -Wno-invalid-offsetof -Wno-attributes -O3 -DNoRepository -ftemplate-depth-100 -I/opt/openfoam8/src/finiteVolume/lnInclude -I/opt/openfoam8/src/meshTools/lnInclude -IlnInclude -I. -I/opt/openfoam8/src/OpenFOAM/lnInclude -I/opt/openfoam8/src/OSspecific/POSIX/lnInclude -fPIC -fuse-ld=bfd -shared -Xlinker --add-needed -Xlinker --no-as-needed Make/linux64GccDPInt32Opt/jetParabolicVelocityFvPatchVectorField.o -L/opt/openfoam8/platforms/linux64GccDPInt32Opt/lib \
-lfiniteVolume -lmeshTools -o /opt/openfoam8/platforms/linux64GccDPInt32Opt/lib/libjetParabolicVelocity.so
wmakeLnInclude: linking include files to ./lnInclude
Making dependency list for source file incrementalJetParabolicVelocityFvPatchVectorField.C
g++ -std=c++11 -m64 -Dlinux64 -DWM_ARCH_OPTION=64 -DWM_DP -DWM_LABEL_SIZE=32 -Wall -Wextra -Wold-style-cast -Wnon-virtual-dtor -Wno-unused-parameter -Wno-invalid-offsetof -Wno-attributes -O3 -DNoRepository -ftemplate-depth-100 -I/opt/openfoam8/src/finiteVolume/lnInclude -I/opt/openfoam8/src/meshTools/lnInclude -IlnInclude -I. -I/opt/openfoam8/src/OpenFOAM/lnInclude -I/opt/openfoam8/src/OSspecific/POSIX/lnInclude -fPIC -c incrementalJetParabolicVelocityFvPatchVectorField.C -o Make/linux64GccDPInt32Opt/incrementalJetParabolicVelocityFvPatchVectorField.o
g++ -std=c++11 -m64 -Dlinux64 -DWM_ARCH_OPTION=64 -DWM_DP -DWM_LABEL_SIZE=32 -Wall -Wextra -Wold-style-cast -Wnon-virtual-dtor -Wno-unused-parameter -Wno-invalid-offsetof -Wno-attributes -O3 -DNoRepository -ftemplate-depth-100 -I/opt/openfoam8/src/finiteVolume/lnInclude -I/opt/openfoam8/src/meshTools/lnInclude -IlnInclude -I. -I/opt/openfoam8/src/OpenFOAM/lnInclude -I/opt/openfoam8/src/OSspecific/POSIX/lnInclude -fPIC -fuse-ld=bfd -shared -Xlinker --add-needed -Xlinker --no-as-needed Make/linux64GccDPInt32Opt/incrementalJetParabolicVelocityFvPatchVectorField.o -L/opt/openfoam8/platforms/linux64GccDPInt32Opt/lib \
-lfiniteVolume -lmeshTools -o /opt/openfoam8/platforms/linux64GccDPInt32Opt/lib/libincrementalJetParabolicVelocity.so
wmakeLnInclude: linking include files to ./lnInclude
Making dependency list for source file inletParabolicVelocityFvPatchVectorField.C
g++ -std=c++11 -m64 -Dlinux64 -DWM_ARCH_OPTION=64 -DWM_DP -DWM_LABEL_SIZE=32 -Wall -Wextra -Wold-style-cast -Wnon-virtual-dtor -Wno-unused-parameter -Wno-invalid-offsetof -Wno-attributes -O3 -DNoRepository -ftemplate-depth-100 -I/opt/openfoam8/src/finiteVolume/lnInclude -I/opt/openfoam8/src/meshTools/lnInclude -IlnInclude -I. -I/opt/openfoam8/src/OpenFOAM/lnInclude -I/opt/openfoam8/src/OSspecific/POSIX/lnInclude -fPIC -c inletParabolicVelocityFvPatchVectorField.C -o Make/linux64GccDPInt32Opt/inletParabolicVelocityFvPatchVectorField.o
g++ -std=c++11 -m64 -Dlinux64 -DWM_ARCH_OPTION=64 -DWM_DP -DWM_LABEL_SIZE=32 -Wall -Wextra -Wold-style-cast -Wnon-virtual-dtor -Wno-unused-parameter -Wno-invalid-offsetof -Wno-attributes -O3 -DNoRepository -ftemplate-depth-100 -I/opt/openfoam8/src/finiteVolume/lnInclude -I/opt/openfoam8/src/meshTools/lnInclude -IlnInclude -I. -I/opt/openfoam8/src/OpenFOAM/lnInclude -I/opt/openfoam8/src/OSspecific/POSIX/lnInclude -fPIC -fuse-ld=bfd -shared -Xlinker --add-needed -Xlinker --no-as-needed Make/linux64GccDPInt32Opt/inletParabolicVelocityFvPatchVectorField.o -L/opt/openfoam8/platforms/linux64GccDPInt32Opt/lib \
-lfiniteVolume -lmeshTools -o /opt/openfoam8/platforms/linux64GccDPInt32Opt/lib/libinletParabolicVelocity.so
wmakeLnInclude: linking include files to ./lnInclude
Making dependency list for source file incrementallinearJetParabolicVelocityFvPatchVectorField.C
g++ -std=c++11 -m64 -Dlinux64 -DWM_ARCH_OPTION=64 -DWM_DP -DWM_LABEL_SIZE=32 -Wall -Wextra -Wold-style-cast -Wnon-virtual-dtor -Wno-unused-parameter -Wno-invalid-offsetof -Wno-attributes -O3 -DNoRepository -ftemplate-depth-100 -I/opt/openfoam8/src/finiteVolume/lnInclude -I/opt/openfoam8/src/meshTools/lnInclude -IlnInclude -I. -I/opt/openfoam8/src/OpenFOAM/lnInclude -I/opt/openfoam8/src/OSspecific/POSIX/lnInclude -fPIC -c incrementallinearJetParabolicVelocityFvPatchVectorField.C -o Make/linux64GccDPInt32Opt/incrementallinearJetParabolicVelocityFvPatchVectorField.o
g++ -std=c++11 -m64 -Dlinux64 -DWM_ARCH_OPTION=64 -DWM_DP -DWM_LABEL_SIZE=32 -Wall -Wextra -Wold-style-cast -Wnon-virtual-dtor -Wno-unused-parameter -Wno-invalid-offsetof -Wno-attributes -O3 -DNoRepository -ftemplate-depth-100 -I/opt/openfoam8/src/finiteVolume/lnInclude -I/opt/openfoam8/src/meshTools/lnInclude -IlnInclude -I. -I/opt/openfoam8/src/OpenFOAM/lnInclude -I/opt/openfoam8/src/OSspecific/POSIX/lnInclude -fPIC -fuse-ld=bfd -shared -Xlinker --add-needed -Xlinker --no-as-needed Make/linux64GccDPInt32Opt/incrementallinearJetParabolicVelocityFvPatchVectorField.o -L/opt/openfoam8/platforms/linux64GccDPInt32Opt/lib \
-lfiniteVolume -lmeshTools -o /opt/openfoam8/platforms/linux64GccDPInt32Opt/lib/libincrementallinearJetParabolicVelocity.so
wmakeLnInclude: linking include files to ./lnInclude
Making dependency list for source file incrementalJetUniVelocityFvPatchVectorField.C
g++ -std=c++11 -m64 -Dlinux64 -DWM_ARCH_OPTION=64 -DWM_DP -DWM_LABEL_SIZE=32 -Wall -Wextra -Wold-style-cast -Wnon-virtual-dtor -Wno-unused-parameter -Wno-invalid-offsetof -Wno-attributes -O3 -DNoRepository -ftemplate-depth-100 -I/opt/openfoam8/src/finiteVolume/lnInclude -I/opt/openfoam8/src/meshTools/lnInclude -IlnInclude -I. -I/opt/openfoam8/src/OpenFOAM/lnInclude -I/opt/openfoam8/src/OSspecific/POSIX/lnInclude -fPIC -c incrementalJetUniVelocityFvPatchVectorField.C -o Make/linux64GccDPInt32Opt/incrementalJetUniVelocityFvPatchVectorField.o
g++ -std=c++11 -m64 -Dlinux64 -DWM_ARCH_OPTION=64 -DWM_DP -DWM_LABEL_SIZE=32 -Wall -Wextra -Wold-style-cast -Wnon-virtual-dtor -Wno-unused-parameter -Wno-invalid-offsetof -Wno-attributes -O3 -DNoRepository -ftemplate-depth-100 -I/opt/openfoam8/src/finiteVolume/lnInclude -I/opt/openfoam8/src/meshTools/lnInclude -IlnInclude -I. -I/opt/openfoam8/src/OpenFOAM/lnInclude -I/opt/openfoam8/src/OSspecific/POSIX/lnInclude -fPIC -fuse-ld=bfd -shared -Xlinker --add-needed -Xlinker --no-as-needed Make/linux64GccDPInt32Opt/incrementalJetUniVelocityFvPatchVectorField.o -L/opt/openfoam8/platforms/linux64GccDPInt32Opt/lib \
-lfiniteVolume -lmeshTools -o /opt/openfoam8/platforms/linux64GccDPInt32Opt/lib/libincrementalJetUniVelocity.so
(DRLinFluids) Singularity> cd ../cylinder2D_multiprocessing/
(DRLinFluids) Singularity> python DRLinFluids_cylinder/launch_multiprocessing_traning_cylinder.py
--------------------------------------------------------------------------
mpirun has detected an attempt to run as root.
Running as root is *strongly* discouraged as any mistake (e.g., in
defining TMPDIR) or bug can result in catastrophic damage to the OS
file system, leaving your system in an unusable state.
We strongly suggest that you run mpirun as a non-root user.
You can override this protection by adding the --allow-run-as-root option
to the cmd line or by setting two environment variables in the following way:
the variable OMPI_ALLOW_RUN_AS_ROOT=1 to indicate the desire to override this
protection, and OMPI_ALLOW_RUN_AS_ROOT_CONFIRM=1 to confirm the choice and
add one more layer of certainty that you want to do so.
We reiterate our advice against doing so - please proceed at your own risk.
--------------------------------------------------------------------------
Traceback (most recent call last):
File "DRLinFluids_cylinder/launch_multiprocessing_traning_cylinder.py", line 89, in <module>
env = envobject_cylinder.FlowAroundCylinder2D(
File "/home/DRLinFluids/examples/cylinder2D_multiprocessing/DRLinFluids_cylinder/DRLinFluids/environment_tensorforce.py", line 114, in __init__
cfd.run_init(foam_root_path, foam_params)
File "/home/DRLinFluids/examples/cylinder2D_multiprocessing/DRLinFluids_cylinder/DRLinFluids/utils.py", line 414, in wrapper
func(*args, **kwargs)
File "/home/DRLinFluids/examples/cylinder2D_multiprocessing/DRLinFluids_cylinder/DRLinFluids/cfd.py", line 144, in run_init
subprocess.run(
File "/opt/miniconda3/envs/DRLinFluids/lib/python3.8/subprocess.py", line 516, in run
raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command 'cd /home/DRLinFluids/examples/cylinder2D_multiprocessing/env01 && . /opt/openfoam8/etc/bashrc && reconstructPar > /dev/null' returned non-zero exit status 1.
This is likely quite a minor point; I think this is very close to working :) .
Hi @jerabaul29 ,
I found the reason for the problem you mentioned: MPI was being run by the root user.
The `decomposePar` and `mpirun` commands are used to generate the initial state of the DRL iteration, and the MPI command fails by default due to the attempt to run as root.
So, I added the variables `OMPI_ALLOW_RUN_AS_ROOT=1` and `OMPI_ALLOW_RUN_AS_ROOT_CONFIRM=1` to allow the root user to run it by default.
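Concretely, the override amounts to exporting the two variables before `mpirun` is invoked; where exactly they are set inside the image is an implementation detail of the container, so this is just the shell-level sketch:

```shell
# Allow Open MPI's mpirun to proceed under the (fake)root user.
# These are the two documented override variables from the mpirun warning.
export OMPI_ALLOW_RUN_AS_ROOT=1
export OMPI_ALLOW_RUN_AS_ROOT_CONFIRM=1
```

Alternatively, `mpirun --allow-run-as-root ...` achieves the same per invocation, as the warning text itself notes.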
Now, everything seems to be OK. Could you try version 0.1.1 again with the same Google Drive link?
Besides, I guess a single Singularity SIF file might be a better choice as the public release target (still seeking advice : ), because users can then build a sandbox with
singularity build --fakeroot --sandbox DRLinFluids library://qlwang/main/drlinfluids:latest
in remote mode, or
singularity build --fakeroot --sandbox DRLinFluids DRLinFluids.sif
in local mode, and can check the image signature with
singularity verify DRLinFluids.sif
(I've uploaded the public GPG key to the Singularity keystore).
Hi @venturi123 ,
many thanks for pointing out the sif solution. I agree it is very nice and neater in theory, but in practice I have regularly hit issues with it. For example, I hit an issue similar to https://groups.google.com/a/lbl.gov/g/singularity/c/D0TA3H5jNw0 on my personal machine (which is running somewhat old software that I am not able to update at the moment for other reasons) when trying to run `singularity build --fakeroot --sandbox DRLinFluids DRLinFluids.sif`. By contrast, un-taring and running from the .tar.gz worked just fine :) . So if you are OK with it, it could be useful to provide both the `.sif` and the `.tar.gz`, so that users like me who may be stuck on older versions of Singularity with regression issues (or hit other problems) can fall back on the `.tar.gz` (which should always work :) ).
many thanks for pushing the updated container :) . I have been able to run the examples in the sandbox using v0.1.1, and it worked just fine (I did not run a full training since my machine is quite slow, but I launched it and checked that the DRL learning was indeed taking place). So I think that you can start recommending this updated methodology and container to your users; it looks to work just fine :) .
Many thanks again for updating this and sharing an updated singularity image, I think that this will be super helpful to make DRLinFluids even easier to use. Let me know when you have pushed updated instructions on the readme, and I can then go through these one more time to confirm one final time that all looks good and works well :) .
Maybe @zonexo you can give a try at using https://drive.google.com/file/d/1IcLai22W5diuLlXB8RGHivvJD39s2Kr8/view?usp=drive_link too, so that you can cross-confirm that it works fine? :) The steps I used (I guess something like this could be added as an "example of use of singularity" in the readme section about running with the container, @venturi123? :) ):
~/Downloads/singularity_sandbox_DRLinFluids> ls
DRLinFluids.tar.gz
~/Downloads/singularity_sandbox_DRLinFluids> tar xfp DRLinFluids.tar.gz
~/Downloads/singularity_sandbox_DRLinFluids> singularity shell --writable --fakeroot --no-home DRLinFluids
Welcome to DRLinFluids! (SIF Version: 0.1.1)
For more details, please refer to our Github repository:
https://github.com/venturi123/DRLinFluids
To toggle DRLinFluids Python environment, please enter (repeatedly):
>>> drl
To activate OpenFOAM8 environment:
>>> of8
Singularity> drl
(DRLinFluids) Singularity> of8
(DRLinFluids) Singularity> cd ../home/
(DRLinFluids) Singularity> git clone https://github.com/venturi123/DRLinFluids.git
Cloning into 'DRLinFluids'...
[...]
(DRLinFluids) Singularity> cd DRLinFluids/
(DRLinFluids) Singularity> cd examples/
(DRLinFluids) Singularity> cd newbc/
(DRLinFluids) Singularity> ./wmakeall
wmakeLnInclude: linking include files to ./lnInclude
Making dependency list for source file incrementallinearJetUniVelocityFvPatchVectorField.C
[...]
(DRLinFluids) Singularity> cd ../cylinder2D_multiprocessing/
(DRLinFluids) Singularity> vim DRLinFluids_cylinder/launch_multiprocessing_traning_cylinder.py
[edit the config since my machine is weak and has very few OpenMPI slots, so running with default config is not possible]
(DRLinFluids) Singularity> git diff
diff --git a/examples/cylinder2D_multiprocessing/DRLinFluids_cylinder/launch_multiprocessing_traning_cylinder.py b/examples/cylinder2D_multiprocessing/DRLinFluids_cylinder/launch_multiprocessing_traning_cylinder.py
index 3a3c3ce..79eb0ce 100644
--- a/examples/cylinder2D_multiprocessing/DRLinFluids_cylinder/launch_multiprocessing_traning_cylinder.py
+++ b/examples/cylinder2D_multiprocessing/DRLinFluids_cylinder/launch_multiprocessing_traning_cylinder.py
@@ -21,7 +21,7 @@ naction=1
foam_params = {
'delta_t': 0.0005,
'solver': 'pimpleFoam',
- 'num_processor': 5,
+ 'num_processor': 2,
'of_env_init': '. /opt/openfoam8/etc/bashrc',
'cfd_init_time': 0.005,
'num_dimension': 2,
(DRLinFluids) Singularity> python DRLinFluids_cylinder/launch_multiprocessing_traning_cylinder.py
OpenFOAM_init running time: 2.67 s
OpenFOAM_init running time: 2.39 s
OpenFOAM_init running time: 2.49 s
OpenFOAM_init running time: 2.43 s
OpenFOAM_init running time: 2.43 s
WARNING:root:No min_value bound specified for state.
Agent defined DONE!
Runner defined DONE!
Episodes: 0%| | 0/3000 [00:00, reward=0.00, ts/ep=0, sec/ep=0.00, ms/ts=0.0, agent=0.0%, comm=0.0%]-0.03170674714285582 0.08929953731102042
1 [0] [-0.2193448543548584] -0.12100628445387623
-0.026772295816326785 0.10469439623469388
1 [0] [-0.003052830696105957] -0.13146669205102068
[happily running ever after :) ...]
(I see that the instructions in the readme were updated yesterday, so the fact that this is how things can be run should already be clear to all users with a bit of experience using OpenFOAM / the command line etc :) . So adding something similar to the terminal output I just posted above is likely a bit redundant; but I think it can still be useful to include such a "complete example" at the end of the readme, including all the steps, all the `cd` commands,
and everything exactly as it looks in the terminal, so that users with less prior experience can see a full example exactly as it appears in the terminal - in my experience when teaching students with different backgrounds, this makes it much easier for them to get started :) . But of course this is just my view and I may be wrong :) ).
Hi @jerabaul29 ,
Very happy to hear that, and many thanks for your helpful advice. Both the sif
and tar
files will be provided in the upcoming update, with a complete description.
Then, I guess I can really close the issue now : ) . Wish you all a nice day.
Excellent :) . Let me know if / when the readme gets updated and I can go through it and review it if you want, and check that the sif and tar files are well accessible from my location too :) .
By the way, as DRLinFluids gets updated and gains more traction, it could be good to advertise it a bit more - are you ok if I advertise DRLinFluids a bit on social networks, such as LinkedIn and similar? :) .
Hi @jerabaul29 ,
I created a GitHub project to better track the progress, and will update it there and notify you once these tasks are completed. : )
As for the second suggestion, I will prepare the social network side at the same time and let you know when it is complete.
Also, apologies for any disturbance and inconvenience.
That sounds excellent :) .
Hi,
I just did some tests.
If I use v0.1.1 sif file:
singularity shell /mnt/e/wtay/TL/AI_machine_learning/Singularity/DRLinFluids_v0.1.1.sif
I get errors while compiling the BC libraries: it complains that I have no write access. When I checked, I realised that all the libs are already there.
However, running:
python DRLinFluids_cylinder/launch_multiprocessing_traning_cylinder.py
gave these errors:
(DRLinFluids) Singularity> python DRLinFluids_cylinder/launch_multiprocessing_traning_cylinder.py
/opt/miniconda3/envs/DRLinFluids/lib/python3.8/site-packages/tensorflow/python/framework/dtypes.py:513: FutureWarning: In the future `np.object` will be defined as the corresponding NumPy scalar.
np.object,
Traceback (most recent call last):
File "DRLinFluids_cylinder/launch_multiprocessing_traning_cylinder.py", line 4, in <module>
from tensorforce import Runner, Agent,Environment
File "/opt/miniconda3/envs/DRLinFluids/lib/python3.8/site-packages/tensorforce/__init__.py", line 23, in <module>
from tensorforce.environments import Environment
File "/opt/miniconda3/envs/DRLinFluids/lib/python3.8/site-packages/tensorforce/environments/__init__.py", line 16, in <module>
from tensorforce.environments.environment import Environment, RemoteEnvironment
File "/opt/miniconda3/envs/DRLinFluids/lib/python3.8/site-packages/tensorforce/environments/environment.py", line 24, in <module>
from tensorforce import TensorforceError, util
File "/opt/miniconda3/envs/DRLinFluids/lib/python3.8/site-packages/tensorforce/util.py", line 20, in <module>
import tensorflow as tf
File "/opt/miniconda3/envs/DRLinFluids/lib/python3.8/site-packages/tensorflow/__init__.py", line 41, in <module>
from tensorflow.python.tools import module_util as _module_util
File "/opt/miniconda3/envs/DRLinFluids/lib/python3.8/site-packages/tensorflow/python/__init__.py", line 45, in <module>
from tensorflow.python import data
File "/opt/miniconda3/envs/DRLinFluids/lib/python3.8/site-packages/tensorflow/python/data/__init__.py", line 25, in <module>
from tensorflow.python.data import experimental
File "/opt/miniconda3/envs/DRLinFluids/lib/python3.8/site-packages/tensorflow/python/data/experimental/__init__.py", line 96, in <module>
from tensorflow.python.data.experimental import service
File "/opt/miniconda3/envs/DRLinFluids/lib/python3.8/site-packages/tensorflow/python/data/experimental/service/__init__.py", line 21, in <module>
from tensorflow.python.data.experimental.ops.data_service_ops import distribute
File "/opt/miniconda3/envs/DRLinFluids/lib/python3.8/site-packages/tensorflow/python/data/experimental/ops/data_service_ops.py", line 25, in <module>
from tensorflow.python.data.experimental.ops import compression_ops
File "/opt/miniconda3/envs/DRLinFluids/lib/python3.8/site-packages/tensorflow/python/data/experimental/ops/compression_ops.py", line 20, in <module>
from tensorflow.python.data.util import structure
File "/opt/miniconda3/envs/DRLinFluids/lib/python3.8/site-packages/tensorflow/python/data/util/structure.py", line 26, in <module>
from tensorflow.python.data.util import nest
File "/opt/miniconda3/envs/DRLinFluids/lib/python3.8/site-packages/tensorflow/python/data/util/nest.py", line 41, in <module>
from tensorflow.python.framework import sparse_tensor as _sparse_tensor
File "/opt/miniconda3/envs/DRLinFluids/lib/python3.8/site-packages/tensorflow/python/framework/sparse_tensor.py", line 29, in <module>
from tensorflow.python.framework import constant_op
File "/opt/miniconda3/envs/DRLinFluids/lib/python3.8/site-packages/tensorflow/python/framework/constant_op.py", line 29, in <module>
from tensorflow.python.eager import execute
File "/opt/miniconda3/envs/DRLinFluids/lib/python3.8/site-packages/tensorflow/python/eager/execute.py", line 27, in <module>
from tensorflow.python.framework import dtypes
File "/opt/miniconda3/envs/DRLinFluids/lib/python3.8/site-packages/tensorflow/python/framework/dtypes.py", line 513, in <module>
np.object,
File "/home/user/.local/lib/python3.8/site-packages/numpy/__init__.py", line 305, in __getattr__
raise AttributeError(__former_attrs__[attr])
AttributeError: module 'numpy' has no attribute 'object'.
`np.object` was a deprecated alias for the builtin `object`. To avoid this error in existing code, use `object` by itself. Doing this will not modify any behavior and is safe.
The aliases was originally deprecated in NumPy 1.20; for more details and guidance see the original release note at:
https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
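For context, the error comes from NumPy 1.24+ removing the deprecated np.object alias that this TensorFlow version still references; the traceback shows the user-site NumPy from ~/.local being picked up instead of the container's pinned one. A small illustration (the version boundaries are from the NumPy release notes quoted above):

```python
import numpy as np

# np.object was an alias for the builtin `object`; it was deprecated in
# NumPy 1.20 and removed in 1.24, so whether the attribute exists at all
# depends on which NumPy installation is actually imported.
has_alias = hasattr(np, "object")

# The builtin `object` works on every NumPy version and is the
# drop-in replacement the error message recommends:
dtype = np.dtype(object)
print(dtype)  # object
```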
I managed to solve the problem by running:
singularity shell --fakeroot --no-home /mnt/e/wtay/TL/AI_machine_learning/Singularity/DRLinFluids_v0.1.1.sif
I had to remove --writable as it complains:
FATAL: no SIF writable overlay partition found in /mnt/e/wtay/TL/AI_machine_learning/Singularity/DRLinFluids_v0.1.1.sif
In this case, the libs still can't be compiled or overwritten, but I managed to start the training.
If I use the tar.gz copy:
tar xfp /mnt/e/wtay/TL/AI_machine_learning/Singularity/DRLinFluids.tar.gz
everything works as mentioned above.
Hi @zonexo ,
Many thanks for your testing. The custom boundary condition newbc
is installed within the singularity image by default, which means that if a re-installation of newbc
is required, the flag --writable
is needed. I will consider adding an option to let users choose the installation location themselves.
For the second question, I notice your working directory is /mnt/e/wtay
, and I wonder if you are working on WSL2 now?
Hi,
I'm having a problem running the cylinder example.
I start with singularity:
then cd to the directory of the cylinder example and run:
python DRLinFluids_cylinder/launch_multiprocessing_traning_cylinder.py
I got the error:
Can you help? Thanks.