nelsonic opened 12 months ago
@LuchoTurtle when you feel the repo is "ready" for wider publication, please let's get on a call and go through what is needed to record the video with the highest possible production quality (image & sound). Forget about applying for jobs and doing pointless interview prep when there are a thousand other candidates. Unless you personally know the hiring manager, the only way for them to get to "know" you, your abilities/skills and communication is by recording a video. It's the reason people are required to submit a video as part of their application to YCombinator https://www.ycombinator.com/video.html or Founders & Coders. Once the video is published, share it on your LinkedIn, set your profile to "available for hire" and let the recruiters come to you!
Before tackling this issue, I need to get #18 sorted first and then change the frontend afterwards (haven't created an issue for this; it will depend on how #18 is implemented). Only then will I start recording a tutorial.
But #18 is quite a tall task to have fully documented and tested. Though I feel it is close to being completed, there's still testing that needs to be done. Having semantic search will make this repo super duper useful imo :)
Audio-to-text (#18) was never the purpose of this repo. It's an image classifier, not a speech-to-text demo. This is the definition of "feature creep". Not saying that this isn't a nice-to-have feature, just definitely not something that we need. I would much prefer to split the audio-to-text out into a separate repo to avoid complicating this one. I should have made that clearer back in November... ⏳ I thought it was just going to be a "quick" addition. But it's really not ...
Please completely ignore audio-to-text for the purposes of the Video Tutorial. The audio-to-text should not be on the home page of this app. It can be on a dedicated page as a separate demo. But it's not the focus.
Semantic search is useful, yes. But only if the person uploading an image doesn't get the desired classification. The whole point of this project was to allow us to upload an image and have it classified. That's it. Anything else is certainly a "bonus" but not the focus.
Avoiding recording the video because a nice-to-have non-core feature isn't done yet is like people who procrastinate on fitness because they don't have the "right" water bottle. 🤦‍♂️
Just get the video done for the "baseline" features so that the video is as brief as possible. At the end of the video you can take 10 seconds to describe the "advanced" features (Semantic Search + Audio-to-Text) and ask people if they want a follow-up video walking through them.
@LuchoTurtle as discussed verbally, we're going to showcase this repo using Video. Once we've tidied up the repo and landing page a bit #7 + #22 ✨ and added the DB (can be `SQLite`) to save metadata + classification #3, we should plan to create a Video Tutorial of this project. 🎥

Todo

- [ ] Ensure that the instructions in the `README.md` are fully up-to-date
- [ ] Setup your desktop/environment to minimise distraction, e.g. don't use your `Chrome`
- [ ] Follow the example by Code to the Moon: https://www.youtube.com/@codetothemoon/videos e.g: https://youtu.be/jib1wjgIaa4 — we have all the equipment necessary to have a similar aesthetic.
- [ ] Record your screen at `1080p` (e.g: connect to an external `1080p` monitor and record that) 📺
- [ ] Re-Create the Project from scratch following the instructions in the `README.md`
- [ ] Add the code as per the `README.md` instructions 🧑‍💻
- [ ] Get the project to the "Wow" moment of classifying an uploaded image 🤩
- [ ] Speed run Fly.io deployment (don't focus on this part ...)
- [ ] Transfer the captured screen recording to the `Mac Mini` for `DaVinci Resolve` editing 🎬 (happy to pair with you on this ...) using the speed editor/keyboard.
- [ ] `Edit` the Video Tutorial to be as tight as possible (speed up any downloads) and
- [ ] Publish the Tutorial to the @dwyl YT channel.
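For the "save metadata + classification" DB mentioned above (#3), the schema really can be tiny. A minimal sketch using Python's built-in `sqlite3` — note the table/column names and the `save_classification` helper are hypothetical illustrations, not the repo's actual schema or stack:

```python
import sqlite3

# Illustrative only: one table holding each upload's metadata + its classification.
conn = sqlite3.connect(":memory:")  # a real app would use a file, e.g. "app.db"
conn.execute(
    """
    CREATE TABLE IF NOT EXISTS images (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        filename TEXT NOT NULL,
        size_bytes INTEGER,
        classification TEXT,
        uploaded_at TEXT DEFAULT CURRENT_TIMESTAMP
    )
    """
)

def save_classification(filename, size_bytes, label):
    """Persist one upload's metadata and the label the model produced."""
    conn.execute(
        "INSERT INTO images (filename, size_bytes, classification) VALUES (?, ?, ?)",
        (filename, size_bytes, label),
    )
    conn.commit()

save_classification("cat.jpg", 204_800, "tabby cat")
row = conn.execute("SELECT filename, classification FROM images").fetchone()
print(row)  # ('cat.jpg', 'tabby cat')
```

Keeping it this small means the tutorial can show the classification being saved without turning the video into a database walkthrough.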