stephenmm opened 7 years ago
We are only doing CI testing and building on Linux. The currently tested platforms are Linux and OSX (my development machine). The Linux CI builds are uploaded from TravisCI. To provide Windows CI builds, we can use AppVeyor. However, we are planning to address Windows support later, once we get more of the basic features done.
Hey - really fantastic project. I was just wondering if there was any update on windows support?
@sklam extremely needed support for windows
Do you guys have Windows support now?
@SteffenRoe I asked that question 2 weeks ago in the RAPIDS-GoAi Slack community, and Keith Kraus and Mark Harris said that 1. if/when Windows support is added, that conversation would happen here, and 2. it's a big undertaking and they'd have to get Windows dev boxes. I'm also excited/anxious to try it out, but I think for now the best we can do (or at least what I've done) is upvote this request (to show interest in numbers) and subscribe to it (and remain hopeful). HTH
What's the ETA for Windows support?
Would also like to know...
There are currently no plans for Windows support. If someone would like to try it and contribute fixes to enable Windows support, we would be happy to support them.
Hi, speak of the devil: I am trying to compile RAPIDS cuDF on Windows 10 and I ran into some trouble. First of all, my configuration:
When I run cmake .. -DCMAKE_CXX11_ABI=ON inside the newly created build folder of cudf, I get the following error:
```
Determining if the CUDA compiler works failed with the following output:
Change Dir: /cygdrive/c/cudf/cpp/build/CMakeFiles/CMakeTmp

Run Build Command(s): /usr/bin/make.exe cmTC_bd22e/fast
/usr/bin/make -f CMakeFiles/cmTC_bd22e.dir/build.make CMakeFiles/cmTC_bd22e.dir/build
make[1]: Entering directory 'C:/cygdrive/c/cudf/cpp/build/CMakeFiles/CMakeTmp'
Building CUDA object CMakeFiles/cmTC_bd22e.dir/main.cu.o
"/cygdrive/c/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v10.2/bin/nvcc.exe" -x cu -c /cygdrive/c/cudf/cpp/build/CMakeFiles/CMakeTmp/main.cu -o CMakeFiles/cmTC_bd22e.dir/main.cu.o
c1xx: fatal error C1083: Cannot open source file: 'C:/cygdrive/c/cudf/cpp/build/CMakeFiles/CMakeTmp/main.cu': No such file or directory
main.cu
make[1]: [CMakeFiles/cmTC_bd22e.dir/build.make:66: CMakeFiles/cmTC_bd22e.dir/main.cu.o] Error 2
make[1]: Leaving directory 'C:/cygdrive/c/cudf/cpp/build/CMakeFiles/CMakeTmp'
make: [Makefile:121: cmTC_bd22e/fast] Error 2
```
Well, I am French, sorry for the language barrier. Basically, from what I understand, during configuration nvcc has to compile a test file generated at cudf/build/CMakeFiles/CMakeTmp/main.cu but does not find it. I checked, and the folder is actually empty...
Would someone please help me? Thanks in advance.
I'm unfortunately not familiar enough with cygwin to be able to help here, but the main.cu is essentially a CMake test file to ensure that the CUDA compiler is working as expected before actually trying to tackle something in the real project.
@eidalex were you able to resolve the main.cu error?
"cudf doesn't support windows"

Am I right that there is no way to use cudf on Windows?
What's the status on Windows support?
As of now cuDF does not support Windows, and there are currently no plans to support it. If WSL supported GPUs and CUDA, that would be ideal for us, as things would "just work".
Unfortunately we do not have the infrastructure or development expertise to support Windows, but if someone would like to explore compiling and running on Windows, we'd be more than happy to support them.
@grv1207 I am sorry, I did not have time to test again... I'll try again someday, but right now I have given up on the idea of using it on Windows...
Any update regarding the Windows support?
There are still no plans to support Windows at this time.
Since the information is now public: our plan for Windows support is to rely on WSL 2.0, which will support running CUDA and GPU computing.
You can see the announcement blog from Microsoft here: https://devblogs.microsoft.com/commandline/the-windows-subsystem-for-linux-build-2020-summary/#wsl-gpu
Rapids was my very first thought upon seeing the WSL 2.0 announcement earlier today :)
@kkraus14 any instructions on how to proceed to get this to work with CUDA and conda? Besides installing the update, what else do I need to do?
I don't believe the update is publicly available quite yet; you can track it here: https://developer.nvidia.com/cuda/wsl. Once it's available you'll basically have a full-fledged Linux installation, so you can just use the normal conda installation commands that you normally would.
Chiming in to show interest in Windows support too. I hope the update comes soon.
I believe the public beta is available now; instructions for setting up WSL 2 with CUDA support are available here: https://developer.nvidia.com/cuda/wsl
Once that's working, you have a full-fledged Linux environment within Windows in which you can install and use RAPIDS.
I did just this, but it appears the CUDA JIT compiler is not included at this point. See #5 of the limitations here. I ran into this issue running the average tip example for cuDF.
The CUDA JIT compiler should not be needed to run cuDF. We explicitly build for supported architectures and do not rely on runtime PTX compilation.
How did this issue manifest for you?
@jrhemstad it seems nvRTC isn't supported in WSL2 CUDA yet, which causes Jitify to blow up.
Here's what I did and the trace:
Python 3.7.7 (default, May 7 2020, 21:25:33)
Type 'copyright', 'credits' or 'license' for more information
IPython 7.16.1 -- An enhanced Interactive Python. Type '?' for help.
In [1]: import cudf, io, requests
...: from io import StringIO
...:
...: url = "https://github.com/plotly/datasets/raw/master/tips.csv"
...: content = requests.get(url).content.decode('utf-8')
...:
...: tips_df = cudf.read_csv(StringIO(content))
...: tips_df['tip_percentage'] = tips_df['tip'] / tips_df['total_bill'] * 100
...:
...: # display average tip by dining party size
...: print(tips_df.groupby('size').tip_percentage.mean())
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-1-31e0c4338384> in <module>
6
7 tips_df = cudf.read_csv(StringIO(content))
----> 8 tips_df['tip_percentage'] = tips_df['tip'] / tips_df['total_bill'] * 100
9
10 # display average tip by dining party size
/mnt/c/Users/steve/dev/rapids/conda-env-37-ubuntu/lib/python3.7/site-packages/cudf/core/series.py in __truediv__(self, other)
1238
1239 def __truediv__(self, other):
-> 1240 return self._binaryop(other, "truediv")
1241
1242 def rtruediv(self, other, fill_value=None, axis=0):
/mnt/c/Users/steve/dev/rapids/conda-env-37-ubuntu/lib/python3.7/contextlib.py in inner(*args, **kwds)
72 def inner(*args, **kwds):
73 with self._recreate_cm():
---> 74 return func(*args, **kwds)
75 return inner
76
/mnt/c/Users/steve/dev/rapids/conda-env-37-ubuntu/lib/python3.7/site-packages/cudf/core/series.py in _binaryop(self, other, fn, fill_value, reflect)
1000 rhs = rhs.fillna(fill_value)
1001
-> 1002 outcol = lhs._column.binary_operator(fn, rhs, reflect=reflect)
1003 result = lhs._copy_construct(data=outcol, name=result_name)
1004 return result
/mnt/c/Users/steve/dev/rapids/conda-env-37-ubuntu/lib/python3.7/site-packages/cudf/core/column/numerical.py in binary_operator(self, binop, rhs, reflect)
93 raise TypeError(msg.format(binop, type(self), type(rhs)))
94 return _numeric_column_binop(
---> 95 lhs=self, rhs=rhs, op=binop, out_dtype=out_dtype, reflect=reflect
96 )
97
/mnt/c/Users/steve/dev/rapids/conda-env-37-ubuntu/lib/python3.7/contextlib.py in inner(*args, **kwds)
72 def inner(*args, **kwds):
73 with self._recreate_cm():
---> 74 return func(*args, **kwds)
75 return inner
76
/mnt/c/Users/steve/dev/rapids/conda-env-37-ubuntu/lib/python3.7/site-packages/cudf/core/column/numerical.py in _numeric_column_binop(lhs, rhs, op, out_dtype, reflect)
432 out_dtype = "bool"
433
--> 434 out = libcudf.binaryop.binaryop(lhs, rhs, op, out_dtype)
435
436 if is_op_comparison:
cudf/_lib/binaryop.pyx in cudf._lib.binaryop.binaryop()
cudf/_lib/binaryop.pyx in cudf._lib.binaryop.binaryop_v_v()
RuntimeError: CUDA_ERROR_JIT_COMPILER_NOT_FOUND
Please let me know if there's any further information you'd like.
@jrhemstad it seems nvRTC isn't supported in WSL2 CUDA yet, which causes Jitify to blow up.
Ah, okay. That's a different statement than what is in the docs:
PTX JIT is not supported (so PTX code will not be loaded from CUDA binaries for runtime compilation).
@stevemarin I misunderstood what the docs were saying about the restriction. You are indeed hitting this limitation.
Just to update folks following this issue, the latest CUDA WSL beta now supports PTX JIT compilation so everything in cuDF (single GPU) should work in it. You can find the updated install instructions here: https://docs.nvidia.com/cuda/wsl-user-guide/index.html
I'm going to leave this issue open for the community to continue to discuss native Windows support.
Does cuDF WSL support require a special developer preview version of Windows? Or does it work with any WSL2 instance in Windows?
See here for requirements: https://docs.nvidia.com/cuda/wsl-user-guide/index.html#getting-started
The relevant part from the link kkraus14 posted is:

"Note: Ensure that you install Build version 20145 or higher. You can check your build version number by running winver via the Windows Run command."
I'd rather not run a version from Microsoft's Insider Program, so based on previous Windows 10 releases (https://docs.microsoft.com/en-us/windows/release-information/), I'm hoping this coming May (2021) we'll see a version of Windows that meets the Build 20145 or higher requirement without needing to run an Insider Program build.
Thanks. I am getting an AWS EC2 provisioned so my organization can use cuDF. I can't find anything that suggests that I can run RAPIDS on the Amazon Linux distribution. Can you confirm whether I can use RAPIDS on a machine running Amazon Linux?
Yes, RAPIDS works on every cloud. https://rapids.ai/cloud
Yes, that will work nicely: https://rapids.ai/cloud.html#AWS-EC2
@marlenezw wrote an excellent blog summarizing how to run RAPIDS on Windows using WSL2 here: https://medium.com/rapids-ai/running-rapids-on-microsoft-windows-10-using-wsl-2-the-windows-subsystem-for-linux-c5cbb2c56e04#cid=av01_so-twit_en-us
"cudf doesn't support windows"

- Docker for Windows doesn't support GPU
- WSL1 and WSL2 do not support GPU

Am I right that there is no way to use cudf on Windows?
No, WSL2 supports GPU. cuDF works fine, but not with the full VRAM size: I get a max of 3 GB from my RTX 2060 (6 GB).
Is there any update/reconsideration on implementing RAPIDS in windows natively (not through WSL2)?
Hi @ManuGraiph and thanks for bringing this up. There are still no plans to support Windows natively. WSL2 is the recommended way to use RAPIDS on Windows. We realize that's not ideal for everyone but we don't currently have the resources to develop on, test on, or support native Windows.
Thanks for the answer! How come CuPy is available on Windows (and on pip) but not cuDF?
CuPy is a project with a significantly different architecture and dependencies. And while we work very hard to ensure that CuPy and cuDF work smoothly together, they are developed and tested by different teams, and on different test infrastructures.
This is still very much an issue where contributions would be greatly appreciated. If anyone would like to try and build cuDF on Windows and fix the resulting issues, that would be a great start. You can always reach out on our Slack for support!
I am posting a gist of my attempt at native Windows compilation of libcudf (Dec 2021) here. This will be useful for developers attempting a native Windows build.
I used commit 5e2aaf9d25c for compilation and will commit my changes to a separate branch, but there are a lot of points outside of cudf that are required to enable compilation.
In the CMake installation on Windows, patch.exe was missing, so I installed Cygwin, installed cmake from it, and added it to PATH (patch.exe is required for the Thrust patch in cudf).
set paths
set PATH=%PATH%;C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.2\bin\
setx CUDAToolkit_ROOT "C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.2"
set PATH=%PATH%;C:\cygwin64\bin
conda activate cudf_dev
"C:\Program Files (x86)\Microsoft Visual Studio\2019\BuildTools\VC\Auxiliary\Build\vcvarsall.bat" x86_amd64
cmake -G "Unix Makefiles" -DBUILD_TESTS=OFF -DBUILD_BENCHMARKS=OFF -DCMAKE_BUILD_TYPE=Release -DCMAKE_CUDA_ARCHITECTURES=70 ..
Comment out the cudftestutil build part (this should be guarded by the BUILD_TESTS flag).
Note: the gtest/gmock find_package calls failed, so I can't build the tests.
Add
list(APPEND CUDF_CUDA_FLAGS -Xcompiler=-Wall,-Werror,-Wno-error=deprecated-declarations)
list(APPEND CUDF_CUDA_FLAGS -Xcompiler=-rdynamic)
To disable the max/min macros, add -DNOMINMAX to the CXX flags.
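As a quick illustration of why that flag matters (my own minimal sketch, not cudf code): <windows.h> defines function-style min/max macros that rewrite calls like std::max(a, b) before the compiler sees them, and NOMINMAX suppresses those macros.

```cpp
// Minimal sketch (illustration only, not cudf code), assuming a native
// Windows/MSVC build where <windows.h> ends up being included somewhere.
#define NOMINMAX            // must be defined before the first <windows.h> include
#include <windows.h>

#include <algorithm>
#include <cstdio>

int main() {
    int a = 3, b = 7;
    // With NOMINMAX defined, std::max resolves to the standard library
    // template; without it, the Win32 max(a, b) macro would rewrite this
    // call into a ternary expression and the code would not compile.
    std::printf("%d\n", std::max(a, b));
    return 0;
}
```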
The RMM macros have issues on Windows; remove these macro usages, or simplify the macros so that MSVC can compile them. The issue could be due to __VA_ARGS__ handling (I will commit the patch in a new branch).
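If the __VA_ARGS__ guess is right, the likely culprit is MSVC's traditional preprocessor, which forwards __VA_ARGS__ to a nested macro as a single argument. A standalone sketch with hypothetical macros (not the actual RMM definitions), including the common EXPAND workaround:

```cpp
// Hypothetical macros illustrating the MSVC traditional-preprocessor quirk;
// these are not the actual RMM definitions.
#include <cstdio>

#define EXPAND(x) x
#define COUNT_IMPL(_1, _2, _3, N, ...) N

// Problematic on the traditional MSVC preprocessor: __VA_ARGS__ is forwarded
// to COUNT_IMPL as a single argument, so the count comes out wrong.
#define COUNT_BROKEN(...) COUNT_IMPL(__VA_ARGS__, 3, 2, 1)

// Common workaround: force an extra expansion pass so the commas inside
// __VA_ARGS__ are re-scanned as real argument separators.
#define COUNT_FIXED(...) EXPAND(COUNT_IMPL(__VA_ARGS__, 3, 2, 1))

int main() {
    // A conforming preprocessor (GCC, Clang, or MSVC with /Zc:preprocessor)
    // prints "3 3"; the traditional MSVC preprocessor gets COUNT_BROKEN wrong.
    std::printf("%d %d\n", COUNT_BROKEN(a, b, c), COUNT_FIXED(a, b, c));
    return 0;
}
```

Compiling with /Zc:preprocessor (available in recent MSVC 2019 releases) is another way around this, if the rest of the build tolerates the conforming preprocessor.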
nvcomp compiles as C++14 by default, so set its standard to C++17 in its own CMakeLists.txt inside _deps/nvcomp-src:
set(CMAKE_CXX_STANDARD 17)
set(CMAKE_CUDA_STANDARD 17)
I have listed only a few of the issues here. Many more issues come up, and hacks need to be done to proceed with the compilation.
In _deps\libcudacxx-src, the Linux softlink "libcxx\include" does not work, so I copied the folder instead.
Jitify did not work, so I disabled it in the CMakeLists.txt.
The keyword-like forms of the logical operators (not, and, or) need #include <iso646.h> or <ciso646>.
For some reason, some of the enums did not work; my guess is that their names collide with macros, so they had to be renamed. For example: gather_bitmask_op::PASSTHROUGH2, io_type::VOID2, concatenate_null_policy::IGNORE2, parse_result::ERROR2.
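For what it's worth, the macro guess is consistent with <windows.h>: headers it pulls in define object-like macros such as ERROR, IGNORE, VOID and PASSTHROUGH, which would rewrite identically named enumerators before compilation. A small illustration with a hypothetical enum (not the actual cudf types):

```cpp
// Illustration only: a hypothetical enum, not the actual cudf declarations.
#include <windows.h>
#include <cstdio>

// An enumerator literally named ERROR would be expanded by the Win32 ERROR
// macro (an integer constant) and fail to compile, which is presumably why
// the cudf enumerators above were renamed with a trailing "2".
// Another workaround is to #undef the offending macros after <windows.h>.
enum class parse_status { SUCCESS, ERROR2 };

int main() {
    std::printf("%d\n", static_cast<int>(parse_status::ERROR2));
    return 0;
}
```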
Replace std::nan( with nan(.
Bug in MSVC: enable_if_t<constexpr_function()> gives an error, so I converted these to "if constexpr / else" blocks. A lot of manual work. (Update: it looks like the issue is fixed internally in MSVC, so the fix could be coming in a future release.)
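A rough sketch of what that conversion looks like, using hypothetical functions rather than the actual cudf code: an overload set gated on enable_if_t over a constexpr predicate becomes a single function with an if constexpr branch.

```cpp
// Hypothetical example of the rewrite, not actual cudf code.
#include <cstdio>
#include <type_traits>

template <typename T>
constexpr bool is_supported() { return std::is_arithmetic_v<T>; }

// Before (problematic on the affected MSVC versions):
//   template <typename T, std::enable_if_t<is_supported<T>()>* = nullptr>
//   void process(T) { ... }
//   template <typename T, std::enable_if_t<!is_supported<T>()>* = nullptr>
//   void process(T) { ... }
//
// After: a single function with an if-constexpr branch.
template <typename T>
void process(T) {
    if constexpr (is_supported<T>()) {
        std::printf("supported type\n");
    } else {
        std::printf("unsupported type\n");
    }
}

int main() {
    process(1);        // supported
    process("text");   // unsupported
    return 0;
}
```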
Replace __builtin_bswap32 with _byteswap_ulong, and __builtin_bswap64 with _byteswap_uint64.
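One way to keep both toolchains compiling is a small shim along those lines; this is my own sketch, not the patch that was actually applied:

```cpp
// Sketch of a portable byte-swap shim matching the substitution above
// (my own illustration, not the actual patch).
#include <cstdint>
#include <cstdio>

#ifdef _MSC_VER
#include <cstdlib>  // _byteswap_ulong, _byteswap_uint64
inline std::uint32_t byteswap32(std::uint32_t v) { return _byteswap_ulong(v); }
inline std::uint64_t byteswap64(std::uint64_t v) { return _byteswap_uint64(v); }
#else
inline std::uint32_t byteswap32(std::uint32_t v) { return __builtin_bswap32(v); }
inline std::uint64_t byteswap64(std::uint64_t v) { return __builtin_bswap64(v); }
#endif

int main() {
    // 0x12345678 byte-swapped is 0x78563412 on both toolchains.
    std::printf("%08x\n", static_cast<unsigned>(byteswap32(0x12345678u)));
    return 0;
}
```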
cmath issue: constexpr is not mandatory for the standard library math functions in C++17, so MSVC's cmath does not mark them constexpr. Most device functions in cudf use these math functions via the --expt-relaxed-constexpr flag. So I added a hack to C:\Program Files (x86)\Microsoft Visual Studio\2019\BuildTools\VC\Tools\MSVC\14.29.30133\include\cmath:
#if _HAS_CXX17
#define C17CONSTEXPR constexpr
#else
#define C17CONSTEXPR
#endif
And added C17CONSTEXPR to the macros _GENERIC_MATH1_BASE and _GENERIC_MATH2_BASE.
A few Linux headers were disabled on Windows.
Sort and stable sort took the most time and memory to compile! Each compiled separately for about 1.5 hours, with peak memory of more than 51 GB each!
Delete the contents of fatbin.ld, get the link command via verbose compilation, and link manually.
dl.lib was not found (conda doesn't have it), so remove it from the command.
Also, I didn't know how to use the DLL in other programs, so I linked cudf as a .lib instead of a .dll, using lib.exe.
Since many symbols are missing, use /FORCE:UNRESOLVED for the undefined symbols.
cudf.lib was >2 GB and was not detected during further example compilations, so I opened objects.rsp, removed large object files such as reductions/*.cu.obj, and then generated cudf.lib again.
Tested with a simple sequence column, cudf::repeat, and sum aggregation (both reduction and groupby). The test code worked!
This report is the tip of the iceberg; you will face a lot more issues while compiling, and I have more notes and hints that I took during compilation. Also, many parts of the code were disabled, so a lot more work is required to enable MSVC to compile the host code and get the full functionality of libcudf. Besides this, the device PTX produced by nvcc was different on Windows (often bigger), so performance will likely differ from the Linux version that we currently support.
Hi guys,
So is there another fork/branch where someone is developing native Windows support? I would love to contribute to it! We need cudf at our organisation, but we can't use WSL, so this would be a priority for us.
@niltecedu it seems there is not too much love here for Win10/11... crickets since 2022...
@brandonhaynes
I use Linux at work, but at home I have Windows and would like to be able to run it on my main machine with conda. Currently, when I run:
conda env create --name pygdf_dev --file conda_environments/testing_py35.yml
I am seeing this error:
I am hoping that this could be easily added to the win-64 channels.