huangteng opened this issue 3 years ago
Hi, thank you for your message! It's TVM b3b2705 and XGBoost 1.4.0.
Please let me know if that helps :)
Hi, I have been able to reproduce this issue with a minimal demo. Could you please take a look at this post on the TVM Discuss forum and run the sample code I uploaded, just to make sure we can reproduce the issue the same way and share the same understanding? https://discuss.tvm.apache.org/t/backtrace-really-basic-code-triggers-autotvm-exception/9750
Sorry for closing this by mistake. I will also try the version you mentioned (it may take some time). To clarify, I want to test whether this dynamic indexing is supported during autoTVM tuning on the LLVM x86 platform rather than CUDA. I would really appreciate it if you could run my simple example from the post above.
I can still reproduce the issue with the versions above, using the simple code from the TVM Discuss post.
Hi! Sorry for the delayed reply; I was looking into this problem. It happens because autotvm fills the arguments with random data during tuning. When such a random tensor is used as indices, it causes out-of-bound accesses for your task. You need to add your task here and set customized input to prevent this issue:
https://github.com/apache/tvm/blob/main/python/tvm/autotvm/measure/measure_methods.py#L584-L587
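To make the failure mode concrete, here is a small NumPy sketch (shapes and values are illustrative assumptions, not taken from the linked code) of why filling an index tensor with arbitrary data breaks a gather-style op:

```python
import numpy as np

# Hypothetical gather-style op: out[i] = data[idx[i]], valid indices in [0, 128).
data = np.random.rand(128).astype("float32")

# random_fill writes arbitrary values into every argument, including the
# index tensor; 700 stands in for a random value outside the valid range.
bad_idx = np.array([3, 700, 42], dtype="int32")

# In compiled code this is a silent out-of-bound read; NumPy raises
# instead, which makes the hazard visible.
oob = False
try:
    _ = data[bad_idx]
except IndexError:
    oob = True
print("out-of-bound access detected:", oob)
```

In the compiled kernel there is no bounds check, so the same access can corrupt memory or crash the measurement process instead of raising.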
Thanks for the hint. Indeed, the logic enters the branch where "scatter" is not in measure_input.task.name. But how do I set the customized input for the tuning task? I searched for "scatter" in the docs, but it is not obvious. Could you please share a sample code snippet? Thanks a lot.
It seems this feature is missing from the API... could you please try modifying that `if scatter` statement?
Yes, commenting out that random_fill part works around the issue... but that raises two problems:
In your case, args[1] is the index tensor. Instead of random fill, you need to generate a random int32 tensor (using numpy.random.randint) and use the shape from build_result.arg_info as the upper bound of the random numbers, so that they cannot cause out-of-bound accesses. I don't remember my own case exactly, but I think I did the same thing.
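A minimal sketch of that suggestion, assuming hypothetical shapes in place of what build_result.arg_info would actually report:

```python
import numpy as np

# Assumed shapes: `data_shape` stands in for the shape recovered from
# build_result.arg_info; `index_shape` is the shape of args[1].
data_shape = (128,)   # extent of the axis being indexed (assumption)
index_shape = (32,)   # shape of the index tensor (assumption)

# Bounded random indices: every value lies in [0, data_shape[0]),
# so the gather can never read out of bounds.
idx = np.random.randint(0, data_shape[0], size=index_shape, dtype="int32")

assert idx.min() >= 0 and idx.max() < data_shape[0]
```

The key point is that the upper bound comes from the *indexed* tensor's extent, not from the index tensor's own shape.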
By the way, my simple code is only meant to reproduce the error; what I am actually doing is auto-tuning a sparse-like computation.
OK, but the shape of the index tensor might not always give the right bound, especially when more than one tensor is involved in the dynamic compute (e.g. my sparse convolution case). Is there a way to pass a pre-customized tensor value to the tuner task? (Or any other method to pass the value from a higher API level?)
I don't think there is such an API in autotvm. You can save the precomputed tensors to a file and load them in measure_methods.py. Auto-scheduler does something similar, but it is missing in autotvm :(
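A hedged sketch of that file-based workaround (the file path and the split between "setup time" and "measure loop" are illustrative assumptions, not autotvm API):

```python
import os
import tempfile
import numpy as np

# Illustrative path; in practice you would pick a location that the
# patched measure_methods.py can find.
path = os.path.join(tempfile.gettempdir(), "my_task_indices.npy")

# At tuning setup: dump the real index tensor from your model code.
indices = np.array([0, 2, 5, 7], dtype="int32")
np.save(path, indices)

# Inside the patched measure loop: load it in place of random_fill
# for that one argument.
loaded = np.load(path)
assert (loaded == indices).all()
```

This keeps the tuner measuring with realistic, in-bounds indices at the cost of a hand-edited measure_methods.py.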
Hi, I tried to run your example on the x86 platform (I simplified it and changed the target to "llvm"), but the autoTVM run failed and printed a low-level backtrace like the one below:
So I would really appreciate it if you could share some detailed information or any debugging tips.
From what I have tried, this small piece of code triggers the issue; it seems the problem happens whenever one tensor is indexed by a value read from another tensor.
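For reference, a NumPy analogue of that failing pattern (the shapes and values are made up; in TVM this would be a `te.compute` whose index expression reads another tensor):

```python
import numpy as np

# Dynamic (data-dependent) indexing: out[i] = data[idx[i]].
# The index comes from a tensor, so the compiler cannot bound it statically.
data = np.arange(10, dtype="float32")
idx = np.array([9, 0, 4], dtype="int32")

out = data[idx]
print(out)  # [9. 0. 4.]
```

It is exactly this pattern that makes random_fill unsafe: the tuner has no way to know that `idx` must stay inside `[0, len(data))`.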