Main Goal
Just as in my test repo, I need to integrate a FastAPI route proxy into this project.
Tasks
[x] Modify next.config.js to remap the API routes to the root /api directory
[x] Add index.py (the Python serverless function; keep everything in one file to reduce cold starts)
[x] Add requirements.txt with the packages the FastAPI server's Python environment needs (humblDATA)
[ ] Modify package.json to run both servers via concurrently when running pnpm dev
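The first task above can be sketched as a rewrite rule in next.config.js. This follows the common Next.js + FastAPI-on-Vercel pattern, but the FastAPI dev port (8000) and the dev/prod split are assumptions, not values taken from this repo:

```javascript
/** @type {import('next').NextConfig} */
const nextConfig = {
  rewrites: async () => [
    {
      // In dev, forward /api/* to the locally running FastAPI server;
      // on Vercel, the root /api directory is served as Python
      // serverless functions, so no proxy hop is needed.
      source: "/api/:path*",
      destination:
        process.env.NODE_ENV === "development"
          ? "http://127.0.0.1:8000/api/:path*"
          : "/api/",
    },
  ],
};

module.exports = nextConfig;
```

For the package.json task, the dev script would then be something like `"dev": "concurrently \"next dev\" \"uvicorn index:app --reload --port 8000\""` (the uvicorn entry point `index:app` is an assumption matching the planned index.py).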
Notes
If it becomes difficult to integrate humblDATA as a requirement, consider moving the FastAPI server to a separate repo hosted on Vercel serverless functions. The app could then take on a larger, more complex structure and use Poetry for package management. Given how few functions should be needed to map from humblDATA, the /api route method should suffice. There shouldn't be database interactions through FastAPI, because I think it'll be faster using JS and Supabase.
Initial thoughts on how to process and save the historical Mandelbrot Channel are to use .parquet files. This file type is read/written very quickly via Polars (the main data analytics engine in humblDATA). When manipulations are done via Python and humblDATA, the data will be sent back to JS as a .parquet file, with the filename a unique hash of the command arguments; if those arguments are passed again, that file will be retrieved and the latest date extracted, so the calculation only runs from the latest date onward. I am not opposed to the FastAPI server reading/writing the Supabase database directly, if it does not add too much complexity to the FastAPI server and the speed is comparable to using JS.
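The filename-as-hash idea above can be sketched with the standard library alone; the cache directory name and the hash length are arbitrary choices for illustration:

```python
import hashlib
import json
from pathlib import Path

CACHE_DIR = Path("cache")


def cache_path(command: str, **kwargs) -> Path:
    # Deterministic filename: hash the command name plus its keyword
    # arguments (sorted, so argument order doesn't change the key).
    # Identical calls therefore map to the same .parquet file.
    key = json.dumps({"command": command, **kwargs}, sort_keys=True)
    digest = hashlib.sha256(key.encode()).hexdigest()[:16]
    return CACHE_DIR / f"{digest}.parquet"
```

On a cache hit, the Python side would read the file with Polars, take the max date column value, and compute only from that date forward before appending and returning the result.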
Resources
next.config.js
package.json