aiap12_group1_sharing

AIAP Sharing Project Topic

Ivan's deadlines:

  1. Complete the spreadsheet by 12 noon on 21/03/2023.
  2. Provide link(s) to your material for our review by 12 noon on 29/03/2023.

MILESTONES:

20/3 Assign applications to engineers to do code snippets.
21/3 Choose topic.
23/3 Do code snippets.
23/3 Begin writing article.
29/3 Submit resources, code samples, and code walkthrough for article (for Ivan's review).
30/3 Start preparing presentation.
10/4 Presentation of group sharing.

GOALS:

Write an article (as a Markdown file) to be published on Medium / Epoch (AISG's forum) on your topic of interest.
Create code samples on GitHub for the code walkthrough in your article.
Present the article to batchmates and engineers.

Research Links

How does Stable Diffusion work?

How AI Image Generators Work (Stable Diffusion / Dall-E) - Computerphile, 17:49 video

Stable Diffusion in Code (AI Image Generation) - Computerphile, 16:59 video

YouTube - How does Stable Diffusion work? – Latent Diffusion Models EXPLAINED

Encoder Decoder What and Why ? – Simple Explanation

What is Attention Mechanism in Deep Learning ? – Quickly Understand

What Is a Transformer Model?

The Transformer Model – Machine Learning Mastery, contains math.

Transformer models: an introduction and catalog
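The attention and transformer links above all build on one core operation: scaled dot-product attention, softmax(QKᵀ/√d)·V. A minimal NumPy sketch for reference (illustrative only, with made-up shapes; not taken from any of the linked materials):

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                   # query-key similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ V, weights

rng = np.random.default_rng(0)
Q = rng.standard_normal((3, 8))   # 3 queries, dimension 8
K = rng.standard_normal((5, 8))   # 5 keys
V = rng.standard_normal((5, 8))   # 5 values

out, w = attention(Q, K, V)
print(out.shape, w.shape)  # each query gets a weighted mix of the 5 values
```

Each row of `w` sums to 1, so every output row is a convex combination of the value vectors; this is the building block the transformer videos above explain in depth.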

Jeremy Howard — The Simple but Profound Insight Behind Diffusion, 1:12:57

At 3:13: It's a simple but profound insight. Which is that it's very difficult for a model to generate something creative, and aesthetic, and correct from nothing...or from nothing but a prompt to a question, or whatever. The profound insight is to say, "Well, given that that's hard, why don't we not ask a model to do that directly? Why don't we train a model to do something a little bit better than nothing? And then make a model that — if we run it multiple times — takes a thing that's a little bit better than nothing, and makes that a little bit better still, and a little bit better still." If you run the model multiple times, as long as it's capable of improving the previous output each time, then it's just a case of running it lots of times. And that's the insight behind diffusion models.
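Howard's insight can be shown with a toy loop (this is not a real diffusion model; `TARGET` and `slightly_better` are invented stand-ins for "the clean sample" and "a model that improves its input a little"):

```python
import random

TARGET = [0.2, 0.8, 0.5]  # hypothetical "clean" sample we want to reach

def slightly_better(x, step=0.1):
    """A weak model: nudge each coordinate a small fraction toward TARGET."""
    return [xi + step * (ti - xi) for xi, ti in zip(x, TARGET)]

random.seed(0)
x = [random.gauss(0.0, 1.0) for _ in TARGET]  # start from pure noise

for _ in range(100):       # run the weak model many times
    x = slightly_better(x)

error = max(abs(xi - ti) for xi, ti in zip(x, TARGET))
print(error)  # after many small improvements, the error is tiny
```

No single call gets anywhere near the target, but iterating a function that is merely "a little bit better than nothing" converges; that is the shape of the sampling loop in diffusion models.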

GitHub - Stable Diffusion from Scratch (abandoned project)

Article sections:

Title: Stable Diffusion with Hugging Face API

1) Intro (JF)
2) What is Stable Diffusion (JH)
3) API Walkthrough (code should be here)
   https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/diffusers_intro.ipynb
   a) Model Explanation (JF)
   b) Schedulers (JF)
   c) Forward + backward diffusion -> latent encode/decode (YL)
4) Application -> Text-to-image Output Demo (Shu Ying)
5) End Notes (YL)
6) References
7) Footnotes
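For the forward-diffusion part of the walkthrough, the standard DDPM formulation is x_t = √ᾱ_t·x₀ + √(1−ᾱ_t)·ε. A minimal NumPy sketch under that assumption (the beta schedule values here are illustrative, not those of any pretrained model; the `diffusers` schedulers implement the equivalent `add_noise` step):

```python
import numpy as np

T = 1000
betas = np.linspace(1e-4, 0.02, T)          # linear noise schedule (assumed)
alphas_cumprod = np.cumprod(1.0 - betas)    # cumulative ᾱ_t

def add_noise(x0, t, rng):
    """Sample x_t ~ q(x_t | x_0) = N(sqrt(ᾱ_t) x0, (1 - ᾱ_t) I)."""
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alphas_cumprod[t]) * x0 + np.sqrt(1.0 - alphas_cumprod[t]) * eps

rng = np.random.default_rng(0)
x0 = rng.standard_normal((16, 16))          # stand-in for a latent image
x_early = add_noise(x0, t=10, rng=rng)      # still strongly correlated with x0
x_late = add_noise(x0, t=999, rng=rng)      # essentially pure noise

print(np.corrcoef(x0.ravel(), x_early.ravel())[0, 1])  # high
print(np.corrcoef(x0.ravel(), x_late.ravel())[0, 1])   # near zero
```

Backward diffusion is the reverse: a trained network predicts ε from x_t, and the scheduler uses that prediction to step from x_t toward x₀, which is the loop the article's code walkthrough runs.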

References

  1. Stable Diffusion with 🧨 Diffusers
    https://huggingface.co/blog/stable_diffusion

  2. The Stable Diffusion Guide 🎨
    https://huggingface.co/docs/diffusers/stable_diffusion

  3. Introducing Hugging Face's new library for diffusion models
    https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/diffusers_intro.ipynb

Example Epoch article:

https://epoch.aisingapore.org/2022/11/how-to-use-grad-cam-to-interpret-your-convolutional-neural-network/