An Industrial Think Tank Focused on Developing and Promoting AI Technology for Geospatial Applications [Please note this group was formed as an academic exercise for educational purposes and does not represent a real world organization]
Replicate a computer vision model from the Keras tutorials #150
These Keras tutorials illustrate some pretty amazing functionality in the library. Specifically relevant to us is the section of computer vision examples. Some of these examples can be quite long, honestly a bit overly complex, and may require some additional knowledge to navigate through. Some examples, however, are easier to follow, including the MNIST ConvNet (a classic example using convolutional neural networks to identify handwritten digits). If we poke around the site a bit more, we can find some great examples of image generation using deep learning models as well.
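For reference, here is a condensed sketch of that MNIST ConvNet, loosely following the keras.io example; the layer stack matches the tutorial, but the epoch count is reduced to keep a Colab run quick.

```python
# Condensed MNIST ConvNet sketch (loosely follows keras.io/examples/vision/mnist_convnet).
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

num_classes = 10
input_shape = (28, 28, 1)

# Load MNIST, scale pixels to [0, 1], and add a trailing channel dimension.
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
x_train = np.expand_dims(x_train.astype("float32") / 255.0, -1)
x_test = np.expand_dims(x_test.astype("float32") / 255.0, -1)
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)

# Two conv/pool blocks followed by a small dense classifier.
model = keras.Sequential([
    keras.Input(shape=input_shape),
    layers.Conv2D(32, kernel_size=(3, 3), activation="relu"),
    layers.MaxPooling2D(pool_size=(2, 2)),
    layers.Conv2D(64, kernel_size=(3, 3), activation="relu"),
    layers.MaxPooling2D(pool_size=(2, 2)),
    layers.Flatten(),
    layers.Dropout(0.5),
    layers.Dense(num_classes, activation="softmax"),
])

model.compile(loss="categorical_crossentropy", optimizer="adam", metrics=["accuracy"])
model.fit(x_train, y_train, batch_size=128, epochs=5, validation_split=0.1)
print(model.evaluate(x_test, y_test))
```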
Using these guides, you should be able to replicate these core techniques for deep learning with imagery. You should be able to run the example code in Google Colab. What's more, in Colab we can explore the data inputs further, and even replace them with our own. _Note: this will most often require that you reshape or resize your images so that they match the inputs in the example, OR you may change the shape of the model. Warning: big models tend to be slow or to crash!_
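As a rough illustration of that resizing step, here is a minimal sketch for feeding your own image to the ConvNet above; the file name is hypothetical, and the 28x28 grayscale target matches the MNIST input shape.

```python
# Rough sketch: adapt one of your own images to the example's (28, 28, 1) input.
import numpy as np
from tensorflow import keras

# "my_digit.png" is a hypothetical file; MNIST digits are light-on-dark, so a
# photo of dark ink on white paper may also need to be inverted (1.0 - arr).
img = keras.utils.load_img("my_digit.png", color_mode="grayscale", target_size=(28, 28))
arr = keras.utils.img_to_array(img) / 255.0   # shape (28, 28, 1), scaled to [0, 1]
batch = np.expand_dims(arr, 0)                # add a batch dimension -> (1, 28, 28, 1)

preds = model.predict(batch)                  # `model` is the ConvNet sketched above
print("predicted digit:", preds.argmax(axis=-1))
```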
A very cool, if somewhat disorienting, example output from a variational autoencoder trained on MNIST data using Keras.
Your mission: Select an example deep learning technique illustrated in the computer vision or generative deep learning code examples linked above. Attempt to recreate the functionality in Google Colab and report back on your experience. You may want to explore beyond the example by investigating the input data or testing the model in various ways. Optionally, you may consider using the data available from Google Maps and Google satellite imagery via the aestheta.core.getTile() function to attempt to generate your own training data (a speculative starting point is sketched after the link below).
https://keras.io/examples/vision/
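If you try that optional tile-data route, something along these lines could serve as a starting point. Heavy caveat: the call signature assumed for aestheta.core.getTile() (tile x/y, zoom, and a source keyword) and the source names are guesses for illustration, not the library's documented API, so check the repo before running.

```python
# Speculative sketch: assemble a small paired dataset from map/satellite tiles.
# ASSUMPTION: getTile(x, y, zoom, source=...) returns a tile image; the real
# aestheta.core API may take different arguments -- adjust to match the library.
import numpy as np
from aestheta import core

tile_coords = [(655, 1583), (656, 1583), (655, 1584)]   # placeholder slippy-map tile indices
zoom = 15

inputs, targets = [], []
for x, y in tile_coords:
    sat = core.getTile(x, y, zoom, source="google_sat")    # assumed source name
    roads = core.getTile(x, y, zoom, source="google_map")  # assumed source name
    inputs.append(np.asarray(sat, dtype="float32") / 255.0)
    targets.append(np.asarray(roads, dtype="float32") / 255.0)

x_train = np.stack(inputs)    # e.g. (N, 256, 256, 3) if tiles are standard 256x256 RGB
y_train = np.stack(targets)   # paired targets for an image-to-image model from the Keras examples
```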