gth828r closed this issue 5 years ago
Hm, to be totally transparent, one of the things I did not do when following the steps at https://github.com/msr-fiddle/pipedream/blob/master/EXPERIMENTS.md#updating-docker-container-for-translation was kill the container and create a new image to boot from. I am only running in the single container and I am not using the driver, so I thought I could get away without setting up the environment again. Please let me know if that step was somehow critical.
You don't need to create a new image if you're not using the driver.
However, you should run the following steps to build the seq2seq.pack_utils module:
cd <directory with pipedream code>/runtime/translation
python setup.py install
Actually, you will need to run the following if you want this to work with the profiler:
cd <directory with pipedream code>/profiler/translation
python setup.py install
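Once the install finishes, a quick way to confirm the compiled extension is importable is to look it up by its dotted name. This is only a generic sketch: the module name `seq2seq.pack_utils._C` is taken from the build output below, and everything else is plain standard-library Python, not part of pipedream itself.

```python
import importlib.util

def extension_available(module_name="seq2seq.pack_utils._C"):
    """Return True if the named module can be located, without importing it."""
    try:
        return importlib.util.find_spec(module_name) is not None
    except ModuleNotFoundError:
        # The parent package (e.g. seq2seq) is not installed at all.
        return False

if extension_available():
    print("seq2seq.pack_utils._C found")
else:
    print("seq2seq.pack_utils._C missing; re-run 'python setup.py install'")
```

Using `find_spec` rather than a bare `import` avoids actually loading the CUDA extension, which can fail for unrelated reasons (e.g. no GPU on the machine running the check).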
Thanks, that setup step in the profiler is what I was missing! However, it looks like the build it tries to do may have an issue in the container environment:
# python setup.py install
running install
running bdist_egg
running egg_info
writing gnmt.egg-info/PKG-INFO
writing dependency_links to gnmt.egg-info/dependency_links.txt
writing requirements to gnmt.egg-info/requires.txt
writing top-level names to gnmt.egg-info/top_level.txt
reading manifest file 'gnmt.egg-info/SOURCES.txt'
writing manifest file 'gnmt.egg-info/SOURCES.txt'
installing library code to build/bdist.linux-x86_64/egg
running install_lib
running build_py
running build_ext
building 'seq2seq.pack_utils._C' extension
gcc -pthread -B /opt/conda/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -I/opt/conda/lib/python3.6/site-packages/torch/include -I/opt/conda/lib/python3.6/site-packages/torch/include/torch/csrc/api/include -I/opt/conda/lib/python3.6/site-packages/torch/include/TH -I/opt/conda/lib/python3.6/site-packages/torch/include/THC -I/usr/local/cuda/include -I/opt/conda/include/python3.6m -c seq2seq/csrc/pack_utils.cpp -o build/temp.linux-x86_64-3.6/seq2seq/csrc/pack_utils.o -O2 -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=_C -std=c++11
cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++
In file included from seq2seq/csrc/pack_utils.cpp:4:0:
/opt/conda/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/torch.h:7:2: warning: #warning "Including torch/torch.h for C++ extensions is deprecated. Please include torch/extension.h" [-Wcpp]
#warning \
^
/usr/local/cuda/bin/nvcc -I/opt/conda/lib/python3.6/site-packages/torch/include -I/opt/conda/lib/python3.6/site-packages/torch/include/torch/csrc/api/include -I/opt/conda/lib/python3.6/site-packages/torch/include/TH -I/opt/conda/lib/python3.6/site-packages/torch/include/THC -I/usr/local/cuda/include -I/opt/conda/include/python3.6m -c seq2seq/csrc/pack_utils_kernel.cu -o build/temp.linux-x86_64-3.6/seq2seq/csrc/pack_utils_kernel.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --compiler-options '-fPIC' --gpu-architecture=sm_70 -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=_C -std=c++11
seq2seq/csrc/pack_utils_kernel.cu(55): error: namespace "at::detail" has no member "deprecated_AT_DISPATCH_ALL_TYPES_AND_HALF"
1 error detected in the compilation of "/tmp/tmpxft_000024fa_00000000-6_pack_utils_kernel.cpp1.ii".
error: command '/usr/local/cuda/bin/nvcc' failed with exit status 1
Hi @gth828r, can you try the proposed fix in #10? Thanks a lot for your patience!
That fixed the issue! I am now able to run python setup.py install in the profiler. I still need to sort through issues on my end regarding experiment setup, so I cannot run the profiler yet, but the original issue has been resolved. Thanks for the quick response! As far as I am concerned, you are free to close this once you pull the changes in.
Awesome, going to close this then (I just merged #10 in).
I worked around the issue in #8, but I am now seeing another submodule missing:
I found that seq2seq/csrc contains the C++ code that I assume gets called under the hood, but I am not sure what I need to do to make the Python interpreter find it as a module. Is there some code-generation step that needs to happen, or anything like that?
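For what it's worth, there is normally no separate code-generation step: the dotted name passed to the extension entry in setup.py is what maps the compiled sources under seq2seq/csrc to an importable module, so running python setup.py install is the whole story. Below is a minimal sketch of that mapping. The paths and names mirror the build log above, but the real setup.py may differ, and in practice the .cu kernel is compiled via torch.utils.cpp_extension (e.g. CUDAExtension) rather than plain setuptools as shown here.

```python
from setuptools import Extension

# Sketch only: the dotted "name" is what makes
# "import seq2seq.pack_utils._C" resolve after installation --
# setup.py compiles the sources and installs the result as
# seq2seq/pack_utils/_C*.so on the Python path.
pack_utils_ext = Extension(
    name="seq2seq.pack_utils._C",            # importable dotted module name
    sources=["seq2seq/csrc/pack_utils.cpp"], # path taken from the build log
    # pack_utils_kernel.cu is handled by nvcc via torch's CUDAExtension
    # helper in practice; plain setuptools cannot compile .cu files.
)

print(pack_utils_ext.name)
```

So if the import still fails after a successful install, the usual suspects are a stale build directory or the wrong Python environment, not a missing generation step.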