ajcwebdev opened this issue 2 days ago
Receiving `HTTP error! status: 500`.

I thought it may be due to not having Ollama and Whisper installed. Ollama is installed ✅ but Whisper is not working. I'm receiving this error:
```
Using cached openai-whisper-20240930.tar.gz (800 kB)
Installing build dependencies ... done
Getting requirements to build wheel ... error
error: subprocess-exited-with-error

× Getting requirements to build wheel did not run successfully.
│ exit code: 1
╰─> [25 lines of output]
    <string>:5: DeprecationWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html
    Traceback (most recent call last):
      File "/Users/jennjunod/Desktop/code/venv/lib/python3.13/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 353, in <module>
        main()
        ~~~~^^
      File "/Users/jennjunod/Desktop/code/venv/lib/python3.13/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 335, in main
        json_out['return_val'] = hook(**hook_input['kwargs'])
        ~~~~^^^^^^^^^^^^^^^^^^^^^^^^
      File "/Users/jennjunod/Desktop/code/venv/lib/python3.13/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 118, in get_requires_for_build_wheel
        return hook(config_settings)
      File "/private/var/folders/n3/0th85985751f2pxg3z50n4mw0000gn/T/pip-build-env-34blidbs/overlay/lib/python3.13/site-packages/setuptools/build_meta.py", line 334, in get_requires_for_build_wheel
        return self._get_build_requires(config_settings, requirements=[])
        ~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/private/var/folders/n3/0th85985751f2pxg3z50n4mw0000gn/T/pip-build-env-34blidbs/overlay/lib/python3.13/site-packages/setuptools/build_meta.py", line 304, in _get_build_requires
        self.run_setup()
        ~~~~~~~~~~~~~~^^
      File "/private/var/folders/n3/0th85985751f2pxg3z50n4mw0000gn/T/pip-build-env-34blidbs/overlay/lib/python3.13/site-packages/setuptools/build_meta.py", line 522, in run_setup
        super().run_setup(setup_script=setup_script)
        ~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/private/var/folders/n3/0th85985751f2pxg3z50n4mw0000gn/T/pip-build-env-34blidbs/overlay/lib/python3.13/site-packages/setuptools/build_meta.py", line 320, in run_setup
        exec(code, locals())
        ~~~~^^^^^^^^^^^^^^^^
      File "<string>", line 21, in <module>
      File "<string>", line 11, in read_version
    KeyError: '__version__'
    [end of output]

note: This error originates from a subprocess, and is likely not a problem with pip.
error: subprocess-exited-with-error

× Getting requirements to build wheel did not run successfully.
│ exit code: 1
╰─> See above for output.

note: This error originates from a subprocess, and is likely not a problem with pip.
```
It looks like it's trying to use `openai-whisper`, not `whisper.cpp`. What is your output from `npm run setup`?
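If you want to sanity-check which Whisper is actually in play, something like this should confirm the whisper.cpp clone is there (the paths are assumptions based on what the setup script reports below):

```bash
# The project shells out to whisper.cpp rather than the openai-whisper
# Python package, so the pip build failure above shouldn't matter.
# Paths are assumptions based on the setup script output.
ls whisper.cpp          # directory the setup script clones
ls whisper.cpp/models   # typically holds one or more ggml-*.bin models
```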
I'll try starting it from scratch and removing the clone I already had
```
> setup
> bash ./scripts/setup.sh

.env file already exists. Skipping copy of .env.example.
yt-dlp could not be found, refer to installation instructions here:
https://github.com/yt-dlp/yt-dlp/wiki/Installation
Ollama is not installed, refer to installation instructions here:
https://github.com/ollama/ollama

added 1 package, and audited 338 packages in 587ms

65 packages are looking for funding
  run `npm fund` for details

found 0 vulnerabilities
whisper.cpp directory already exists. Skipping clone and setup.
Setup completed successfully!
```
Okay, run the following:

`brew install yt-dlp ollama`

And then let's just test the CLI before messing with the frontend or backend:

`npm run as -- --video "https://www.youtube.com/watch?v=MORMZXEaONk"`

Let me know if that works or give me the error if it doesn't.
Here's a link to the error: https://app.warp.dev/block/htpqF1yS3QnKIHSxnNQpce
Okay good, one more brew install:

`brew install ffmpeg`

And then try again.
It works 🎉
Fantastic! You don't need to worry about this right now but I opened a separate issue (#43) to capture some of these problems we ran into so I can improve the setup script.
Problem

The repo is currently organized like so:

- `src` folder containing all the CLI logic.
- `packages` directory with three sub-directories:
  - `server` containing a Fastify backend that imports the CLI logic and sets up routes for each major CLI flag (`/video`, `/rss`, etc.).
  - `web` containing a React frontend that interacts with the Fastify server.
  - `astro` containing an Astro site with a preconfigured content collection that matches Autoshow's output.

Right now, when you generate a show note with the React frontend, it displays as plain text. If you want to see the markdown rendered, you have to move the files to the `astro` package and start the Astro site separately.
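For reference, the directory layout described above looks roughly like this (only the directories mentioned in this issue; everything else omitted):

```
autoshow/
├── src/               # CLI logic
└── packages/
    ├── server/        # Fastify backend wrapping the CLI
    ├── web/           # React frontend that calls the server
    └── astro/         # Astro site with the content collection
```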
Goal

The big goal is to eventually get rid of the `packages` directory entirely, but the goal of this issue specifically is to combine the `web` and `astro` directories into a single integrated frontend. Since Astro supports React, the logic should mostly transfer over.

Instructions for Development
Clone the repo and set up Whisper and a local LLM model:
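Something like the following, where the repo URL and the Ollama model are placeholders and `npm run setup` is the script shown earlier in this thread:

```bash
# Clone the repo (URL is a placeholder) and run the setup script,
# which copies .env.example and clones/builds whisper.cpp.
git clone https://github.com/OWNER/autoshow.git
cd autoshow
npm run setup

# Pull a local model for Ollama (model choice is just an example).
ollama pull llama3.2
```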
Start the server (Node v22 required):
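For example (the exact npm script name is an assumption, so check `package.json` for the server script):

```bash
# Start the Fastify backend; script name is an assumption.
npm run serve
```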
Keep the terminal running and open another to start the frontend:
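For example, assuming the React app in `packages/web` uses a standard Vite `dev` script (which would explain the localhost:5173 address below):

```bash
# Start the Vite dev server for the React frontend.
cd packages/web
npm run dev
```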
Open localhost:5173 to see the React app. Include a YouTube video for testing (I use https://www.youtube.com/watch?v=MORMZXEaONk), select Whisper.cpp for the transcript, Ollama for the LLM, and any prompt.

Open another terminal for the Astro site:
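For example, assuming a standard Astro `dev` script in `packages/astro` (Astro's dev server defaults to port 4321):

```bash
# Start the Astro dev server.
cd packages/astro
npm run dev
```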
Open localhost:4321 to see the Astro site.