Closed rbavery closed 1 year ago
Found 1 changed notebook. Review the changes at https://gitnotebooks.com/developmentseed/tensorflow-eo-training-2/pull/14
Even in the case of single date imagery, we are converting a sequence of bands to a a sequence of length one.
Redundant "a"
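To make the point concrete, a minimal NumPy sketch (array shapes are hypothetical, not taken from the notebook) of wrapping single-date bands into a time sequence of length one:

```python
import numpy as np

# Hypothetical single-date image: (height, width, bands)
single_date = np.zeros((64, 64, 6))

# Prepend a time axis of length one: (time, height, width, bands)
sequence = single_date[np.newaxis, ...]

print(sequence.shape)  # (1, 64, 64, 6)
```

The same trick works on batched tensors; the model then sees every input as a (possibly trivial) sequence.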
LSTMs introduced the concept of "state" to RNN computations, termed the "memory block".
Do you mean to say that LSTMs introduced states in general, or just the one called a memory block?
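For context on the state question, here is a minimal NumPy sketch of one LSTM cell step (dimensions and weights are hypothetical): the cell state `c` is the "memory block," distinct from the hidden state `h` that vanilla RNNs already carry.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM step: c is the cell state ("memory block"), h the hidden state."""
    n = h_prev.shape[0]
    z = W @ x + U @ h_prev + b     # stacked pre-activations for all four gates
    f = sigmoid(z[:n])             # forget gate: what to erase from memory
    i = sigmoid(z[n:2 * n])        # input gate: what to write to memory
    o = sigmoid(z[2 * n:3 * n])    # output gate: what to expose as h
    g = np.tanh(z[3 * n:])         # candidate memory content
    c = f * c_prev + i * g         # cell state update (the memory block)
    h = o * np.tanh(c)             # hidden state output
    return h, c

# Tiny hypothetical dimensions just to exercise the step
rng = np.random.default_rng(0)
n_in, n_hid = 4, 3
x = rng.normal(size=n_in)
h0, c0 = np.zeros(n_hid), np.zeros(n_hid)
W = rng.normal(size=(4 * n_hid, n_in))
U = rng.normal(size=(4 * n_hid, n_hid))
b = np.zeros(4 * n_hid)
h1, c1 = lstm_step(x, h0, c0, W, U, b)
print(h1.shape, c1.shape)  # (3,) (3,)
```

A plain RNN has only `h`; the gated cell state `c` is the addition that helps gradients flow over long sequences.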
prudent to be aware of other approaches that can work with limited labeled datasets
Maybe highlight U-Net as an example of an arch that performs well on limited data
Excellent lesson @rbavery — it's a perfect balance of big picture and detail. Just a few minor suggestions, but otherwise looks good to merge!
This addresses https://github.com/developmentseed/servir-amazonia-2-internal/issues/9, part of https://github.com/developmentseed/servir-amazonia-2-internal/issues/10, and part of https://github.com/developmentseed/servir-amazonia-2-internal/issues/3 by providing background on vision transformers (ViT), ViT variants, and SAM. It also covers background on RNNs, mostly to explain why folks should steer clear of them here.
Next to-do:
If time allows: