Hi, I have a meta-learning problem where all inputs and outputs are 2D images, and I'd like to use the wonderful deep GP package you have developed. Based on the 2D MNIST example, it seems your code flattens each 2D image into a 1D vector and then samples pixels as context points. In my problem, I'd like to use past image frames as the context instead; in other words, I'd like to exploit the autocorrelation in the target variable. Is this possible? If so, how does it scale? My images are typically 360×180. I've tried this with MAML and a CNN, but the results were not satisfactory.
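To make the data layout I have in mind concrete, here is a minimal sketch (not using your package's API): each pixel is treated as a point indexed by `(t, row, col)`, so all pixels of the past frames form the context set and the pixels of the remaining frame(s) form the target set. `frames_to_context_target` is my own hypothetical helper, just to illustrate the setup.

```python
import numpy as np

def frames_to_context_target(frames, n_context_frames):
    """Split a stack of frames into context/target point sets.

    frames: array of shape (T, H, W); each pixel becomes a point with
    input (t, row, col) and scalar output (the pixel value). Pixels of
    the first n_context_frames frames are the context; the rest are
    the target.
    """
    T, H, W = frames.shape
    t, r, c = np.meshgrid(
        np.arange(T), np.arange(H), np.arange(W), indexing="ij"
    )
    # Inputs: (T, H*W, 3) array of (t, row, col) coordinates.
    x = np.stack([t, r, c], axis=-1).reshape(T, H * W, 3).astype(float)
    # Outputs: (T, H*W, 1) array of pixel values.
    y = frames.reshape(T, H * W, 1).astype(float)
    x_ctx = x[:n_context_frames].reshape(-1, 3)
    y_ctx = y[:n_context_frames].reshape(-1, 1)
    x_tgt = x[n_context_frames:].reshape(-1, 3)
    y_tgt = y[n_context_frames:].reshape(-1, 1)
    return (x_ctx, y_ctx), (x_tgt, y_tgt)
```

My scalability worry is exactly here: with 360×180 frames, each frame alone contributes 64,800 context points.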