Closed by Oufattole 2 months ago
This was resolved in pull request #3: the PyTorch dataset class was given an enum that selects between event-stream-style collating (which was previously supported) and triplet-style collating (which is depicted in the image above and was just added!).
I'm trying to implement the observation-level embedder described in the figure below, which converts the PyTorch dataset batches into a sequence of embeddings, one per observation, with static variables preceding dynamic variables.
Where do I find the vocab size in the PyTorch dataset class?
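For context, here is a minimal sketch of the kind of observation-level embedder described above: each observation's code index is embedded, dynamic observations also fold in their numeric value, and static embeddings are concatenated before dynamic ones. The class name, constructor parameters, and batch tensor names here are all assumptions for illustration, not the actual API of the dataset class in this repo; in particular, `vocab_size` would need to come from wherever the dataset exposes its code vocabulary.

```python
import torch
import torch.nn as nn


class ObservationEmbedder(nn.Module):
    """Hypothetical sketch: one embedding per observation, static codes first.

    `vocab_size`, `embed_dim`, and the forward() argument names are
    illustrative assumptions, not the repo's real interface.
    """

    def __init__(self, vocab_size: int, embed_dim: int):
        super().__init__()
        # One row per code in the vocabulary.
        self.code_embed = nn.Embedding(vocab_size, embed_dim)
        # Fold each dynamic observation's numeric value in via a
        # learned linear projection of the scalar.
        self.value_proj = nn.Linear(1, embed_dim)

    def forward(
        self,
        static_codes: torch.Tensor,   # (batch, n_static), long code indices
        dynamic_codes: torch.Tensor,  # (batch, n_dynamic), long code indices
        dynamic_values: torch.Tensor, # (batch, n_dynamic), float values
    ) -> torch.Tensor:
        static_emb = self.code_embed(static_codes)
        dynamic_emb = self.code_embed(dynamic_codes) + self.value_proj(
            dynamic_values.unsqueeze(-1)
        )
        # Static observations precede dynamic ones in the output sequence:
        # shape (batch, n_static + n_dynamic, embed_dim).
        return torch.cat([static_emb, dynamic_emb], dim=1)
```

A batch of 2 patients with 3 static and 5 dynamic observations would yield a `(2, 8, embed_dim)` sequence, ready for a downstream sequence model.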