SH-2093 · Closed 1 week ago
Does the command `torch.cuda.is_available()` return `True` in your environment?
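A quick way to answer this is a small sanity-check script. This is a hypothetical helper of my own (the function name and report keys are not part of pytorch-tabnet); it only relies on the standard `torch.cuda` API.

```python
# Hypothetical CUDA sanity check; `cuda_report` is my own helper name.
def cuda_report():
    try:
        import torch
    except ImportError:
        # torch itself is missing, so CUDA cannot be used at all
        return {"torch_installed": False}
    report = {
        "torch_installed": True,
        "cuda_available": torch.cuda.is_available(),
    }
    if report["cuda_available"]:
        # These calls exist in the public torch.cuda API
        report["device_count"] = torch.cuda.device_count()
        report["device_name"] = torch.cuda.get_device_name(0)
    return report

print(cuda_report())
```

If `cuda_available` is `False` here, no `device_name` setting in TabNet will put work on the GPU.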
Yes, the CUDA environment is set up. I have no idea how to pinpoint the issue.
This is the TabNet-related part of the code:
Why did you close this issue? Did you solve your problem?
Sorry, I didn't solve it. I closed the issue by a mistaken click.
What is the size of your dataset? What batch size are you using? The smaller the dataset and the smaller the batch, the smaller the impact of your GPU on computational speed.
The training dataset contains approximately 600,000 samples, each with 256 features. The batch size is the default (1024). Actually, I did not explicitly transfer the dataset to the GPU, since according to the documentation the input to the `fit` function is a NumPy array. I am not sure if this is the cause of the problem.
No, everything should be done internally. You may want to increase the batch size to see whether you get an improvement in speed with GPU versus CPU. It's difficult to help you, as the CPU vs GPU support has been here for a long time and is definitely working, so I don't know what is going wrong with your setup.
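For the batch-size comparison suggested above, a minimal timing harness is enough. This is a sketch with hypothetical names (`time_fit`, `fake_fit` are mine); in this thread you would replace the dummy workload with a wrapper around `clf.fit(...)` that only varies `batch_size` between runs.

```python
import time

def time_fit(fit_fn, batch_sizes):
    """Call `fit_fn(batch_size)` once per batch size; return seconds elapsed."""
    timings = {}
    for bs in batch_sizes:
        start = time.perf_counter()
        fit_fn(bs)
        timings[bs] = time.perf_counter() - start
    return timings

# Dummy workload standing in for a real training call.
def fake_fit(batch_size):
    return sum(range(batch_size))

print(time_fit(fake_fit, [1024, 4096, 16384]))
```

If the CPU and GPU timings stay identical even at large batch sizes, the GPU is most likely not being used at all.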
Thank you very much for the help. I will comment here if I make any progress.
The environment used is CUDA + Linux. When I changed the device configuration from "cuda"/"gpu" to "cpu" (the method I use is `clf = TabNetRegressor(device_name = "cuda")`), there seemed to be no change in time consumption.![device_trans](https://github.com/dreamquark-ai/tabnet/assets/40696682/39cc0299-5f26-4274-bb55-8915b4953e0f)
I installed the pytorch-tabnet package using the command `conda install pytorch-tabnet`, rather than building it from source on Linux. I'm not sure if this is the reason for the issue.
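One way to rule out a misconfigured `device_name` (as in the message above) is to pick the device explicitly from the actual CUDA state. This is a hypothetical helper of my own; the only pytorch-tabnet fact it assumes is that `TabNetRegressor` accepts a `device_name` argument, which the thread already shows.

```python
def pick_device_name(prefer_gpu=True):
    """Return "cuda" when a GPU is actually usable, else "cpu".

    Hypothetical helper, not part of pytorch-tabnet.
    """
    if prefer_gpu:
        try:
            import torch
            if torch.cuda.is_available():
                return "cuda"
        except ImportError:
            pass
    return "cpu"

# Assumed usage with the constructor shown earlier in the thread:
# clf = TabNetRegressor(device_name=pick_device_name())
print(pick_device_name())
```

Printing the chosen device before training makes it obvious whether a run that "uses the GPU" is actually falling back to the CPU.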