I wonder if it’s possible to have a GPU-native implementation, with the tree stored as one-hot vectors whose dimension is the total number of operators plus entries for value, feature, and degree. You would evaluate every operator at each node in the tree, and mask out the outputs that aren’t used.
However, I’m not sure this would actually work: storing the tree densely means a tree of depth n occupies O(2^n) slots, so a deeply-nested chain of n nodes would force O(2^n) node evaluations, whereas dynamic expression evaluation only touches the n nodes that actually exist.
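To make the idea concrete, here is a minimal CPU sketch (numpy, no real GPU kernel) of the masked-evaluation scheme described above. All names here (`OPS`, `evaluate_dense`, the heap layout) are illustrative assumptions, not an existing API: the tree is padded into a dense binary-heap layout, every slot computes *all* operators, and the slot’s one-hot row selects the one actually used.

```python
import numpy as np

OPS = ["add", "mul", "neg", "const", "feature"]  # toy operator set (assumption)

def evaluate_dense(onehot, consts, feat_idx, X):
    """Evaluate a tree stored densely in heap layout.

    onehot:   (num_slots, len(OPS)) one-hot rows; children of slot i
              live at slots 2i+1 and 2i+2 (padding slots are masked out).
    consts:   per-slot constant value (used only by "const" slots).
    feat_idx: per-slot feature index (used only by "feature" slots).
    X:        (batch, num_features) input matrix.
    """
    num_slots = onehot.shape[0]
    batch = X.shape[0]
    out = np.zeros((num_slots, batch))
    # Walk slots bottom-up. Every slot evaluates every operator;
    # the one-hot row then masks all but the operator in use.
    for i in reversed(range(num_slots)):
        l, r = 2 * i + 1, 2 * i + 2
        left = out[l] if l < num_slots else np.zeros(batch)
        right = out[r] if r < num_slots else np.zeros(batch)
        candidates = np.stack([
            left + right,                # add
            left * right,                # mul
            -left,                       # neg (unary: ignores right)
            np.full(batch, consts[i]),   # const
            X[:, feat_idx[i]],           # feature
        ])
        out[i] = onehot[i] @ candidates  # mask via dot with one-hot row
    return out[0]
```

Note the blow-up in the padding itself: a chain like `neg(neg(...neg(x)...))` of depth n has only n live nodes, but the heap layout above allocates 2^n − 1 slots, all of which get evaluated. That is exactly the O(2^n)-vs-n gap mentioned above.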