I thought of three main options.
The first one is to use a very standard, small image dataset called CIFAR-10. It is composed of 60,000 32x32 RGB images. Some example images are shown below:
This dataset only has 10 different classes, but there is also a 100-class version (CIFAR-100).
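For reference, here is a minimal sketch of loading CIFAR-10, assuming we use PyTorch/torchvision (Keras has an equivalent in tf.keras.datasets.cifar10):

```python
import torchvision
import torchvision.transforms as transforms
from torch.utils.data import DataLoader

# Download CIFAR-10 and wrap it in a DataLoader.
transform = transforms.ToTensor()
train_set = torchvision.datasets.CIFAR10(
    root="./data", train=True, download=True, transform=transform
)
train_loader = DataLoader(train_set, batch_size=64, shuffle=True)

images, labels = next(iter(train_loader))
print(images.shape)        # torch.Size([64, 3, 32, 32])
print(train_set.classes)   # the 10 class names
```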
My second idea is to use another neural network to generate synthetic images. This model is pretty good at generating realistic images for certain classes (see below).
In my opinion, using this model could lead to some pretty interesting results, but it could also be quite costly, since we would have to generate a large dataset of images of the class we want to work with before starting the evolutionary process. The model is already implemented and available with pretrained weights in both TensorFlow and PyTorch.
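I haven't named the generator above, but assuming a class-conditional GAN such as BigGAN (available with pretrained weights through the pytorch-pretrained-biggan package), pre-generating a class-specific batch could look roughly like this:

```python
import torch
from pytorch_pretrained_biggan import (BigGAN, one_hot_from_names,
                                       truncated_noise_sample, save_as_images)

# Assumption: BigGAN stands in for the (unnamed) pretrained generator.
model = BigGAN.from_pretrained("biggan-deep-256")  # downloads pretrained weights
model.eval()

truncation = 0.4                    # lower = more realistic, less diverse samples
batch_size = 16
target_class = "golden retriever"   # hypothetical class to build the dataset from

class_vec = torch.from_numpy(
    one_hot_from_names([target_class] * batch_size, batch_size=batch_size)
)
noise_vec = torch.from_numpy(
    truncated_noise_sample(truncation=truncation, batch_size=batch_size)
)

with torch.no_grad():
    images = model(noise_vec, class_vec, truncation)  # (16, 3, 256, 256) in [-1, 1]

save_as_images(images)  # writes output_0.png, output_1.png, ...
```

Repeating this sampling loop for many batches is what would give us the large class-specific dataset, which is exactly where the cost of this option comes from.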
The third option is to pick a search query and automatically download the image results from Google Images through one of the readily available APIs.
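I haven't settled on a specific API; as an illustration, assuming the Google Custom Search JSON API (which requires an API key and a custom search engine ID), downloading results could look roughly like this:

```python
import pathlib
import requests

# Assumptions: Google Custom Search JSON API with a valid key and engine ID (cx).
# Any other image-search API would follow the same request/download pattern.
API_KEY = "YOUR_API_KEY"         # placeholder
SEARCH_ENGINE_ID = "YOUR_CX_ID"  # placeholder
QUERY = "red mushroom"           # hypothetical query to build the dataset from

resp = requests.get(
    "https://www.googleapis.com/customsearch/v1",
    params={
        "key": API_KEY,
        "cx": SEARCH_ENGINE_ID,
        "q": QUERY,
        "searchType": "image",
        "num": 10,  # this API caps results at 10 per request
    },
    timeout=30,
)
resp.raise_for_status()

# Save each returned image URL to disk.
out_dir = pathlib.Path("downloaded_images")
out_dir.mkdir(exist_ok=True)
for i, item in enumerate(resp.json().get("items", [])):
    img = requests.get(item["link"], timeout=30)
    if img.ok:
        (out_dir / f"{QUERY.replace(' ', '_')}_{i}.jpg").write_bytes(img.content)
```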
What do you think?