developmentseed / tensorflow-eo-training-2

A workshop taught in 2023 for NASA SERVIR, ACCA, and members of other environmental organizations in South America
https://developmentseed.org/tensorflow-eo-training-2/
Apache License 2.0

RNNs and Transformers #14

Closed rbavery closed 1 year ago

rbavery commented 1 year ago

This addresses https://github.com/developmentseed/servir-amazonia-2-internal/issues/9, part of https://github.com/developmentseed/servir-amazonia-2-internal/issues/10, and part of https://github.com/developmentseed/servir-amazonia-2-internal/issues/3 by providing background on vision transformers (ViT), variants of ViT, and SAM. It also covers background on RNNs, mostly to highlight that folks should steer clear of them.

Next to-do:

If time allows:

gitnotebooks[bot] commented 1 year ago

Found 1 changed notebook. Review the changes at https://gitnotebooks.com/developmentseed/tensorflow-eo-training-2/pull/14

lillythomas commented 1 year ago

Even in the case of single date imagery, we are converting a sequence of bands to a a sequence of length one.

Redundant "a"
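For context on the sentence being reviewed, a minimal NumPy sketch (illustrative only, not from the lesson; the array names and shapes are made up) of how single-date imagery can be fed to a sequence model as a sequence of length one:

```python
import numpy as np

# Hypothetical single-date image: height x width x spectral bands.
image = np.zeros((64, 64, 6), dtype=np.float32)

# Flatten pixels, then add a time axis of length 1 so the same
# (timesteps, features) input shape a sequence model expects also
# works when there is only one acquisition date.
pixels = image.reshape(-1, image.shape[-1])   # (4096, 6)
sequences = pixels[:, np.newaxis, :]          # (4096, 1, 6): length-1 sequence per pixel
```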


lillythomas commented 1 year ago

LSTMs introduced the concept of "state" to RNN computations, termed the "memory block".

Do you mean to say that LSTMs introduced states in general, or just the one called a memory block?
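To make the question concrete, a minimal NumPy sketch of a single LSTM step (illustrative only; weight shapes and initialization are assumptions, not the lesson's code). The cell state `c` is the persistent "memory block"; the gates decide what it forgets, writes, and exposes:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM step. c_prev is the cell state ("memory block")."""
    n = h_prev.shape[0]
    z = W @ x + U @ h_prev + b      # all four gate pre-activations stacked
    f = sigmoid(z[0:n])             # forget gate: what to drop from memory
    i = sigmoid(z[n:2 * n])         # input gate: what to write to memory
    g = np.tanh(z[2 * n:3 * n])     # candidate memory content
    o = sigmoid(z[3 * n:4 * n])     # output gate: what memory to expose
    c = f * c_prev + i * g          # updated cell state (persistent memory)
    h = o * np.tanh(c)              # hidden state passed to the next step/layer
    return h, c

rng = np.random.default_rng(0)
d, n = 3, 4                         # input dim, hidden dim (arbitrary)
W = rng.normal(size=(4 * n, d)) * 0.1
U = rng.normal(size=(4 * n, n)) * 0.1
b = np.zeros(4 * n)
h, c = np.zeros(n), np.zeros(n)
for t in range(5):                  # run over a short random sequence
    h, c = lstm_step(rng.normal(size=d), h, c, W, U, b)
```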


lillythomas commented 1 year ago

prudent to be aware of other approaches that can work with limited labeled datasets

Maybe highlight U-Net as an example of an arch that performs well on limited data


lillythomas commented 1 year ago

Excellent lesson, @rbavery! It's a perfect balance of big picture and descriptive detail. Just a few minor suggestions, but otherwise it looks good to merge!