Closed ValeriiBaidin closed 4 years ago
Thanks for bringing this to my attention!
So it looks like what's happening is that because you've already set your model to be a GPU model, the `@gpu` macro is redundant. The purpose of `@gpu` is to let you instantiate a normal CPU model and then decide at run time whether to train it on the GPU, so it doesn't recognize GPU models directly. Either of the following will work for your example:
```julia
model = CTM(corp, 10)
@time @gpu train!(model, tol=0)
```

or

```julia
model = gpuCTM(corp, 10)
@time train!(model, tol=0)
```
However, there's no downside to allowing for this redundancy, so I'll go ahead and add functionality to the `@gpu` macro so that it accepts GPU models as well.
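The change described above can be sketched with Julia's multiple dispatch: make the GPU entry point idempotent, so a model that is already on the GPU passes through unchanged while a CPU model gets converted first. The type and function names below are illustrative stand-ins, not the package's actual internals.

```julia
# Illustrative sketch (assumed names, not the package's real code):
# a GPU model is a no-op, a CPU model is converted before training.
abstract type AbstractCTM end
struct CPUModel <: AbstractCTM end   # stand-in for CTM
struct GPUModel <: AbstractCTM end   # stand-in for gpuCTM

to_gpu(m::GPUModel) = m              # already a GPU model: pass through
to_gpu(m::CPUModel) = GPUModel()     # CPU model: convert (placeholder)
```

With this pattern, a macro like `@gpu` can simply route every model through `to_gpu` and both `CTM` and `gpuCTM` inputs behave correctly.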
Fixed.
Thank you so much!!! It's great.
What about this model? Unfortunately it doesn't work:
```julia
model = fCTM(corp, 10)
@time @gpu train!(model, tol=0)
```
Unfortunately the `fCTM` model doesn't currently support GPU acceleration.
I may at some point be able to implement it, but currently the quality of the output is not good. It may have something to do with the 32-bit precision, or the probabilistic model itself may need to be reworked.
It would also require designing a parallel algorithm for updating tau in order to really take advantage of the GPU acceleration.
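To illustrate what "a parallel algorithm for updating tau" could mean in practice, here is a hedged sketch using Julia's built-in threading, assuming the per-document updates are independent. The `per_doc_update` function is a hypothetical placeholder, not part of the package, and real GPU parallelism would need a kernel rather than CPU threads.

```julia
# Hypothetical placeholder for whatever per-document computation
# the real tau update performs (not the fCTM algorithm).
per_doc_update(doc) = sum(doc)

# Sketch: update each document's tau entry in parallel, assuming
# the entries can be computed independently of one another.
function update_tau!(tau::Vector{Float64}, docs)
    Threads.@threads for d in eachindex(docs)
        tau[d] = per_doc_update(docs[d])   # independent per-doc work
    end
    return tau
end
```

If the updates were instead coupled across documents, they would have to be serialized or reformulated, which is presumably part of the design work mentioned above.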
I've tried the code:
The code finished immediately, and `showtopics(model, 20, cols=9)` shows the same result for all topics.
At the same time, the following code works as expected.