Closed: seanlatias closed 4 years ago
This should solve #197.
@seanlatias I don't think this is the right solution. We need to generate an error message here. Otherwise, it becomes really confusing to users which compute is the first one and which is the second.
Do we allow usage of the same name across different kernels?
Even with this "fix", what happens if the programmer forgets to rename the labels using `X0 = kernel.X[0]`? If one has to rename explicitly anyway, why not ask them to simply change the original labels?
There will be errors if the users use `kernel.X` directly, because it returns a list instead of a tensor. So the users need to use at least `kernel.X[0]` and `kernel.X[1]`. They do not necessarily need to rename them to `X0` and `X1`.
We do allow using the same name across different stages. In this case, there is no problem. For example, the users use `Stage1.X` and `Stage2.X` to specify which `X` in which stage they want to refer to.
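To illustrate the idea (this is not the actual HeteroCL implementation, just a minimal plain-Python mock with hypothetical names): each stage acts as a namespace, so the same tensor name can live in two stages without colliding.

```python
class Stage:
    """Mock of a compute stage that namespaces its tensors by name."""

    def __init__(self, name):
        self.name = name
        self.tensors = {}  # maps tensor name -> tensor object

    def __getattr__(self, attr):
        # Called only when normal attribute lookup fails, so
        # Stage1.X resolves to the tensor registered under "X".
        tensors = self.__dict__["tensors"]
        if attr in tensors:
            return tensors[attr]
        raise AttributeError(attr)


Stage1 = Stage("Stage1")
Stage2 = Stage("Stage2")
Stage1.tensors["X"] = "tensor X defined in Stage1"
Stage2.tensors["X"] = "tensor X defined in Stage2"

# The same name "X" is unambiguous once qualified by its stage.
print(Stage1.X)
print(Stage2.X)
```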
I think we can give a warning instead of an error if the users use the same name in the same stage. Maybe they just don't bother to create a new name for each computation because they are not going to schedule them later.
This will result in confusion in general. I personally don't think this programming model has clear semantics. Note that the name hint was just a "hint", and we need it only because the compiler has no way to know the output tensor name when parsing ops. In the case of duplicated name hints, we should just error out.
In this PR, we enable multiple computations to share the same name. When using a shared name during scheduling, users need to specify which computation they mean by indexing into it. For example:
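A minimal sketch of the intended behavior (a plain-Python mock with hypothetical names, not the actual PR code): when two computations reuse a name hint, the entry is promoted to a list, and the user disambiguates with an index like `kernel.X[0]`.

```python
class Kernel:
    """Mock kernel that collects stages whose name hints may collide."""

    def __init__(self):
        self._stages = {}

    def add_stage(self, name, stage):
        # First occurrence stores the stage directly; a duplicated name
        # promotes the entry to a list so both stages stay addressable.
        if name not in self._stages:
            self._stages[name] = stage
        else:
            existing = self._stages[name]
            if not isinstance(existing, list):
                self._stages[name] = [existing]
            self._stages[name].append(stage)

    def __getattr__(self, name):
        stages = self.__dict__["_stages"]
        if name in stages:
            return stages[name]
        raise AttributeError(name)


kernel = Kernel()
kernel.add_stage("X", "first compute")
kernel.add_stage("X", "second compute")

# Duplicated name: kernel.X is now a list, so index to pick one.
print(kernel.X[0])  # first compute
print(kernel.X[1])  # second compute
```

Note that under this model, `kernel.X` alone returns a list rather than a tensor, which is the source of the confusion discussed above.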
More examples can be found in `test_api.py` under `tests`.