Closed TNTran92 closed 2 years ago
The RTX 3060 (Ampere architecture) doesn't support CUDA 10.1:
https://docs.nvidia.com/deeplearning/cudnn/support-matrix/index.html
I see. In that case, is there an MXNet release newer than 1.8.0 for Windows? I saw that 1.9.0 has CUDA 11.2 support, but only for Linux: https://pypi.org/project/mxnet-cu112/
@josephevans Does MXNet have 1.9.0 wheels (supporting CUDA 11) for Windows?
I have the same problem. Nothing happens after I run the test: x = nd.ones((3,4), ctx=gpu()), even though my RTX 3070 is recognized.
Confirmed that on Windows, MXNet only ships wheels up to mxnet-cu102. The issue is that CUDA 10.2 only supports GPUs up through Turing (20-series and older). Also confirmed that mxnet-cu102 runs seamlessly on a GTX 1650.
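The constraint described above (CUDA toolkit version vs. GPU architecture) can be sketched as a small lookup table. This is a hypothetical helper, not MXNet code; the SM values are assumptions drawn from NVIDIA's CUDA release notes, so verify them against the support matrix linked earlier in this thread.

```python
# Highest compute capability (SM) each CUDA toolkit can target (assumed values;
# check NVIDIA's docs). Turing (GTX 16xx / RTX 20xx) = SM 7.5; Ampere
# consumer cards (RTX 30xx) = SM 8.6, first targetable by CUDA 11.1.
MAX_SM_PER_CUDA = {"10.1": 75, "10.2": 75, "11.0": 80, "11.1": 86, "11.2": 86}

def wheel_supports_gpu(cuda_version: str, gpu_sm: int) -> bool:
    """Return True if a build against cuda_version can target a GPU of that SM."""
    return MAX_SM_PER_CUDA.get(cuda_version, 0) >= gpu_sm

print(wheel_supports_gpu("10.2", 86))  # mxnet-cu102 + RTX 3060 -> False
print(wheel_supports_gpu("11.2", 86))  # mxnet-cu112 + RTX 3060 -> True
print(wheel_supports_gpu("10.2", 75))  # mxnet-cu102 + GTX 1650 -> True
```

This matches the behaviour reported here: mxnet-cu102 works on the Turing GTX 1650 but hangs on the Ampere RTX 3060/3070.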
After spending a few days trying to build it from source, I have given up on getting version 1.9 working on Windows. I installed Ubuntu and am now trying to get mxnet-cu112 to work. For those interested, CUDA 11.2 (and the compatible cuDNN) only supports up to Ubuntu 20.04.
Confirmed that mxnet-cu112 works on an RTX 3060 in Ubuntu 20.04. For now, it looks like Ubuntu is the only way to get a 30-series card working.
All right, thanks for the information.
Description
Hello all, I am new to MXNet, so I am trying to install it and run it on the GPU. I am trying to install mxnet-cu101 to run on an RTX 3060. However, as soon as I run the lines below, the code stalls. Nothing happens after that.
```python
from mxnet import nd, gpu, gluon, autograd
from mxnet.gluon import nn
from mxnet.gluon.data.vision import datasets, transforms
import time

x = nd.ones((3, 4), ctx=gpu())
```
Note that if I take out ctx=gpu(), it works just fine.
```
[[1. 1. 1. 1.]
 [1. 1. 1. 1.]
 [1. 1. 1. 1.]]
<NDArray 3x4 @cpu(0)>
```
Also, it seems MXNet recognizes that there is a GPU.
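For reference, a minimal sketch of this sanity check: `pick_device` is a hypothetical helper, and the commented-out calls assume the MXNet 1.x context API (`mx.context.num_gpus`, `mx.gpu`, `nd.waitall`). One detail worth knowing is that MXNet executes NDArray operations asynchronously, so a call like `nd.ones((3, 4), ctx=gpu())` returns immediately and the hang only surfaces when the result is synchronized (e.g. by printing it).

```python
# Hypothetical helper: choose a device label from the detected GPU count.
def pick_device(num_gpus: int) -> str:
    return "gpu(0)" if num_gpus > 0 else "cpu(0)"

print(pick_device(1))  # -> gpu(0)

# With a matching build installed (e.g. mxnet-cu112 on Ubuntu 20.04),
# the actual check would look like:
#   import mxnet as mx
#   from mxnet import nd
#   ctx = mx.gpu() if mx.context.num_gpus() > 0 else mx.cpu()
#   x = nd.ones((3, 4), ctx=ctx)
#   nd.waitall()  # NDArray ops are lazy; this forces execution, which is
#                 # where an incompatible CUDA build stalls
#   print(x)
```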
Questions:
1. What am I doing wrong?
2. Is there a comprehensive guide on how to install MXNet for a 30-series GPU on Windows with conda?
Below are my specs:
- CUDA 10.1 and cuDNN 8.0.5.39
- mxnet-cu101
- GeForce Game Ready Driver: version 512.77
- Device name: MyPC
- Processor: AMD Ryzen 9 5900X 12-Core Processor, 3.70 GHz
- Installed RAM: 32.0 GB
- System type: 64-bit operating system, x64-based processor
- Pen and touch: Pen support
- Edition: Windows 11 Pro
- Version: 21H2
- Installed on: 1/2/2022
- OS build: 22000.675
- Experience: Windows Feature Experience Pack 1000.22000.675.0